Published in final edited form as: Sch Psychol Forum Res Pract. 2018 Spring;12(1):17–29. Author manuscript; available in PMC 2019 May 28.

Adoption Costs Associated With Processing Strengths and Weaknesses Methods for Learning Disabilities Identification

Jacob Williams 1, Jeremy Miciak 2
PMCID: PMC6537899  NIHMSID: NIHMS995903  PMID: 31149322

Abstract

There is controversy regarding the relative merits of cognitive assessment for the identification of learning disabilities. Proponents of cognitive assessment have suggested that multitiered systems of support (MTSS) should be supplemented with routine, systematic assessment of cognitive processes following a determination of inadequate response to evidence-based interventions in order to document a pattern of processing strengths and weaknesses (PSW methods) as an inclusionary criterion for learning disabilities. However, the financial costs incurred by this addition to MTSS are not well known. In the present study, we present a systematic case study to estimate the costs associated with adopting routine assessment of cognitive processing for students referred for special education evaluation. We estimate that implementation within a district would cost between $1,960 and $2,400 per student, assuming no existing infrastructure. These expenses are discussed in relation to evidence for the educational value of such assessments and inherent trade-offs between assessment and intervention.


The field of learning disabilities (LD) research and practice has been described as “having a checkered history … littered with contention, false starts, fads, dead ends, and pseudoscience …” (Stanovich, 1989, p. 487). One historical source of uncertainty and controversy is how best to define and identify individuals with LD. From 1977 to 2004, federal regulations in the United States specifically mandated that a cognitive discrepancy framework be utilized for the identification of LD. This framework hypothesized that LD was marked by an aptitude–achievement discrepancy, which differentiated it from low achievement not due to LD. These cognitive discrepancy frameworks were typically operationalized through assessment methods to document a discrepancy between a student’s overall cognitive ability (as indexed by a measure of IQ) and the student’s performance in a specific academic domain. Various procedures were proposed to establish this cognitive discrepancy as an inclusionary criterion for LD identification, including the use of simple difference scores and more complex regression approaches (Fletcher, Lyon, Fuchs, & Barnes, 2007; Kavale, 2002). These methods persisted in federal law and in practice for over a quarter century despite considerable criticism (Bradley, Danielson, & Hallahan, 2002). These criticisms were based on research demonstrating the unreliability of discrepancy methods for individual LD identification decisions (e.g., Francis et al., 2005; Macmann, Barnett, Lombard, Belton-Kocher, & Sharpe, 1989), a problem inherent to any identification method that relies on applying strict cut points to imperfect psychometric data, including low achievement criteria and methods premised on inadequate response to instruction (Fletcher et al., 2014; Francis et al., 2005). Additionally, IQ–achievement discrepancy methods demonstrate poor validity, as documented in empirical research. Students with academic deficits with and without an IQ–achievement discrepancy demonstrate similar academic, cognitive, and behavioral performance (Hoskyn & Swanson, 2000; Stuebing et al., 2002). Functional brain imaging studies have also failed to identify differences in the brain activation patterns of children with and without an IQ–achievement discrepancy (Simos, Rezaie, Fletcher, & Papanicolaou, 2013; Tanaka et al., 2011). Finally, recent research demonstrates that IQ is a poor predictor of intervention response (Stuebing, Barth, Molfese, Weiss, & Fletcher, 2009; Vellutino, Scanlon, & Lyon, 2000).

At present, proponents of cognitive assessment frameworks for LD definition and identification have shifted away from advocacy for methods utilizing a simple discrepancy between IQ and academic achievement toward more complex and robust assessment practices aiming to identify an intraindividual pattern of cognitive processing strengths and weaknesses (PSW; Flanagan, Fiorello, & Ortiz, 2010; Hale, Alfonso, et al., 2010; Schneider & Kaufman, 2017). Proponents assert that PSW methods can be used to identify specific academic and processing deficits within students, explain etiology, and help plan for subsequent treatment(s). Notably, proponents of routine assessment of cognitive processes as part of the LD identification process advocate that this assessment occur following a determination of inadequate response to evidence-based interventions provided in a multitiered system of supports (MTSS; Fletcher-Janzen & Reynolds, 2009; Hale, Alfonso, et al., 2010; Hale, Wycoff, & Fiorello, 2011). Thus, PSW proponents advocate for cognitive assessment in addition to utilizing MTSS. The value of the addition of cognitive assessment should be evaluated in light of its costs and benefits. There is active debate regarding the benefits of PSW methods (see, for example, Fletcher & Miciak, 2017, and Schneider & Kaufman, 2017, for a point–counterpoint debate about the merits of these methods). However, the specific costs incurred by a district wishing to adopt PSW methods for the identification of LD are not well established. In the present study, we directly evaluate this issue through a case study of an exemplar district.

Learning Disabilities Identification and the Role of Cognitive Assessment

There is general agreement about the minimum data that are necessary for an LD identification decision (Bradley et al., 2002; Fletcher et al., 2007). These data must provide evidence of (a) a specific academic deficit(s) or impairment(s); (b) a failure to make adequate progress despite the adequacy of previous instruction; and (c) a consideration of important exclusionary clauses (e.g., sensory disorders or academic deficits due to limited language exposure). Many argue that these data are most efficiently collected through a school-wide MTSS, also called a response-to-intervention framework. Broadly understood, MTSSs are school-wide systems that include universal screening, tiered instruction of increasing intensity for students with more severe academic deficits, and ongoing progress monitoring of student performance in response to intervention (Fletcher & Vaughn, 2009; Fuchs, Mock, Morgan, & Young, 2003; Jimerson, Burns, & VanDerHeyden, 2007). Although there is general agreement on the essential characteristics of MTSSs, there are numerous uncertainties and controversies about their implementation, particularly regarding the collection and characteristics of essential data for LD identification (Fuchs & Deshler, 2007; Hale, Alfonso, et al., 2010). Despite these uncertainties in the implementation of MTSS, most proponents of PSW methods recognize the system’s role in screening and preventing academic difficulty and propose a hybrid LD identification system that relies on data collected from MTSS and an assessment of cognitive processing following a determination of inadequate response (Fletcher-Janzen & Reynolds, 2009; Hale, Alfonso, et al., 2010; Johnson, 2014). These hybrid PSW methods differ in two fundamental ways from the LD identification procedures that would be utilized in a method based on data generated solely within an MTSS. First, PSW proponents argue that a comprehensive assessment for LD identification must include the routine assessment of cognitive processes. Second, PSW proponents argue that positive LD identification decisions require the identification of an intraindividual PSW pattern as an inclusionary criterion (Hale, Alfonso, et al., 2010; Schneider & Kaufman, 2017).

PSW methods for the identification of LD reflect an underlying classification hypothesis that discrepancies in cognitive skills and associated academic deficits are a marker of LD (Miciak, Taylor, Denton, & Fletcher, 2015) and align with the historical conceptualization of LD as a specific weakness in a sea of strengths (Shaywitz, 2003). Although different methods to operationalize a PSW framework have been offered (e.g., Flanagan, Ortiz, & Alfonso, 2016; Hale & Fiorello, 2004; Naglieri & Das, 1997), there are consistencies across methods. First, these methods de-emphasize IQ as a holistic indicator of aptitude and focus instead on interpreting an intraindividual pattern of factor or index level scores across a battery of tests to determine cognitive strengths and weaknesses (McGill & Busse, 2017). Second, these methods hypothesize that intraindividual variability in these cognitive processes is a marker for LD, can explain the cause(s) of academic difficulties, and can assist in planning subsequent interventions. Finally, and not inconsequentially, all of these methods require the administration of multiple assessments of cognitive processing—a time- and labor-intensive process.

Monetary Implications of LD Identification

Among those who argue against the necessity of the routine assessment of cognitive processing, many have referenced the high cost of administration as an additional factor that contraindicates their use in LD identification (Fletcher, Coulter, Reschly, & Vaughn, 2004; Glutting, Watkins, & Youngstrom, 2003; MacMillan & Siperstein, 2002; Miciak, Taylor, Stuebing, & Fletcher, 2017). When implementing a PSW model, states and districts must consider the cost of implementing an MTSS framework and the additional cost associated with the administration of cognitive assessments. Literature reviewing LD evaluations utilizing an IQ–achievement discrepancy method has estimated that the cost of evaluations could range from $800 to $8,000 (President’s Commission on Excellence in Special Education, 2002). Other, more specific estimations have placed the cost of LD identification utilizing an IQ–achievement discrepancy method at approximately $2,500 or $3,000 per student (Gresham, 2002; VanDerHeyden, Witt, & Gilbertson, 2007). Specific to PSW methods, Glutting et al. (2003) estimated that widespread implementation of PSW identification methods would add between $27.1 million and $100 million in additional psychoeducational assessment costs across the United States.

METHODS

In this study, we generate an estimate of the possible costs for a school district to adopt a cognitive assessment process associated with a PSW LD identification model for a suspected reading disability. Providing estimates of these costs offers decision makers in educational settings additional information to inform policy and practice decisions. When generating an estimate of the potential costs for a PSW evaluation, costs for materials, labor, and training must be considered. However, costs may vary greatly between different districts and/or schools depending on the level of training and current assessment infrastructure. For example, some districts may have school psychologists who are familiar with PSW methods and may have some of the assessments. In some contexts, the district may allow school psychologists considerable latitude in their own decisions and procedures, and therefore practices may vary by school (or even within schools). Estimating true, universal costs associated with PSW adoption is an impossible task. As such, we made the a priori decision to calculate the costs associated with the adoption of a PSW evaluation process for a district with no existing infrastructure in place.

Assumptions

The present study investigates the costs incurred by a district to adopt and implement a common, explicitly formulated PSW method over the course of a school year. This analysis relies on three stipulations:

  1. PSW method. Our analysis is based on the adoption of the dual discrepancy/consistency model (DD/C; Flanagan, Ortiz, & Alfonso, 2013) and calculates costs for an assessment for LD in reading, the most common academic area in which students qualify for special education services. The DD/C model is the most clearly defined PSW model, and it includes an explicit assessment protocol (Flanagan, Ortiz, & Alfonso, 2013). The DD/C model is the only PSW method to offer a comprehensive implementation training (see https://www.schoolneuropsych.com/xba/index.php?id=938). Additionally, in the few districts that have adopted explicit PSW criteria for the identification of LD, those criteria mirror the recommendations of the DD/C (see, for example, Portland Public Schools, 2013).

  2. District status. The case study district has no current infrastructure for implementation (other than currently employed school psychologists or licensed diagnosticians). Within the district, each school has a dedicated school psychologist or licensed diagnostician, and each of these professionals is salaried at the U.S. average for the position.

  3. Assessment purchasing. We calculated the number of assessments and administration licenses to be purchased based upon the quantities that would generate the least cost for the district. Administration licenses were purchased to ensure the district had infrastructure in each of its school buildings to administer a necessary assessment to each student if an evaluation were needed. It is assumed that the district has the infrastructure to administer an assessment electronically if this option is available from a publisher. Assessment prices were generated using publicly available pricing on publisher websites (see Table 3).

Table 3.

Assessment Purchase Costs

Assessments | District Cost | Method for Cost Calculation
WISC-IV, WIAT-III, NEPSY-II | $8,098.00 | 31 licenses (1 for each school) at $175.00 per license; 14 subtests administered for WISC and NEPSY ($1.50 each) and 5 for WIAT ($0.75 each), purchased per subtest: (175 × 31) + ((14 × 1.50) + (5 × 0.75)) × 108 (a)
WJ-IV | $61,739.60 | One complete kit with 25 administration forms at a cost of $1,991.60, purchased for each site (b)
Test of Orthographic Competence (TOC) | $9,455.00 | One complete kit with 25 response booklets for each age level at a cost of $305.00, purchased for each site (c)
WJ-III Diagnostic Supplement | $19,091.35 | One complete kit with 25 administration forms at a cost of $615.85, purchased for each site (d)
Early Reading Assessment (ERA) | $8,525.00 | One complete kit with 25 administration forms at a cost of $275.00, purchased for each site (e)
KABC-II | $29,791.00 | One complete kit with 25 administration forms at a cost of $963.05, purchased for each site (f)
DAS-II | $39,525.00 | Complete kit with 15 administration forms at a cost of $1,275.00, purchased for each site (g)
Total assessment purchase cost | $176,224.95 |

Note. WISC-IV = Wechsler Intelligence Scale for Children–Fourth Edition; WIAT-III = Wechsler Individual Achievement Test–Third Edition; NEPSY-II = NEPSY–Second Edition; WJ-IV = Woodcock-Johnson IV; WJ-III = Woodcock-Johnson III; KABC-II = Kaufman Assessment Battery for Children–Second Edition; DAS-II = Differential Ability Scales–Second Edition.

(a) Retrieved from http://www.helloq.com/pricing/Price-Options.html on October 25, 2016.

Parameters for Cost Estimate

Calculations of costs for the case study district are based on reported Child Find data for a medium-sized district in Oregon containing 31 total elementary, middle, and high schools. The district contained 17,401 students across grades K–12, and 50% of the students were female. The student population was predominantly White (69%), with the largest minority student group being Hispanic/Latino (15%). With a population of over 17,000 students, this district ranked in the largest 5% of districts in the United States in terms of student population (Gray, Bitterman, & Goldring, 2013). We chose a district in Oregon because Oregon schools have been at the forefront of adoption of PSW methods, and Oregon is one of only two states that explicitly allows for the use of PSW methods for LD identification (Maki, Floyd, & Roberson, 2015).

Assessment Numbers

Over the course of a school year, the district completed 108 evaluations for reading disabilities. The number of evaluations conducted is equivalent to approximately 30% of all evaluations for special education conducted in the district over the reviewed calendar year. This is consistent with other medium-to-large districts in a similar geographic area. All assessment training and purchasing expenses were determined based upon publishers’ pricing rates listed on product websites.

RESULTS

We calculated labor costs based upon a school psychologist’s rate of $45 per hour. In 2014, the median salary for school psychologists in the United States was $68,900 (U.S. News and World Report, 2016). While varied, school psychologists’ work periods are similar to a school year, which is typically around 190 days (American Psychological Association, 2005). Using the median salary, a 190-day work period, and a 40-hour work week, an hour worked by a school psychologist costs approximately $45.
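This derivation can be checked directly; the following is a minimal sketch of the arithmetic (the 8-hour day is our assumption, implied by the 40-hour, 5-day week):

```python
# Reproduce the hourly labor rate from the figures cited above.
# Assumes a 190-day work year at 8 hours per day (a 40-hour, 5-day week).
median_salary = 68_900              # 2014 U.S. median school psychologist salary
work_hours = 190 * 8                # 1,520 hours per school year

hourly_rate = median_salary / work_hours
print(round(hourly_rate, 2))        # 45.33, rounded to $45 in our calculations
```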

Flanagan et al. (2013) acknowledge that the DD/C identification is often criticized as complex but that the perceived “complexity” rests “with insufficient or inadequate graduate training” (p. 362) related to the theory and psychometrics that underlie the procedure. However, they put forth that “no other available methods provide the kind of guidance found in DD/C for making sense of collected data and creating defensible interpretations related to an individual’s test performances” and that experience with the method “will be a powerful aid in developing the requisite competency and skill” (p. 362). Therefore, a district initiating a DD/C identification model would require training for all school psychologists in the DD/C method.

Providing an online DD/C certification training to each of the district’s 31 school psychologists would cost $600 per psychologist. The training offers 21 hours of continuing education credits, which for the purposes of our calculation are classified as hourly labor costs. This online training is described as a webinar-based program “designed for assessment specialists who want to gain proficiency in using cross-battery assessment. This webinar-based program is taught by the leading experts in XBA, including Drs. Dawn P. Flanagan, Samuel O. Ortiz, and Vincent C. Alfonso. … This continuing education program is designed to provide assessment specialists with the theoretical knowledge of cross-battery assessment and learn how to competently use all of the features of the X-BASS software system” (https://www.schoolneuropsych.com/xba/index.php?id=938). The training is available virtually, so no travel costs would accrue. The following formula was used to calculate the total cost for training registration and hourly labor: ($600 × 31) + ($45 × 21 × 31). The total calculated cost for the online training registration and labor was $47,895.

An alternative to the online training for the district may be to hire a consultant DD/C expert to provide onsite training for school psychologists. If we assume this training would occur over two 8-hour days with each of the 31 school psychologists in attendance, that would result in labor costs of $22,320 ($45 × 8 × 2 × 31). Estimating an additional cost for the trainer of $4,000, which includes the trainer’s daily rate and travel expenses, the total training cost would be $26,320 versus $47,895 for the more extensive online training.
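Both training totals reduce to simple products; a short sketch of that arithmetic, using the registration fee, hourly rate, and trainer fee stated above:

```python
PSYCHOLOGISTS = 31
HOURLY_RATE = 45        # labor rate derived earlier

# Online certification: $600 registration plus 21 credit hours of labor.
online = (600 * PSYCHOLOGISTS) + (HOURLY_RATE * 21 * PSYCHOLOGISTS)
print(online)           # 47895

# Onsite alternative: two 8-hour days of labor plus a $4,000 trainer fee.
onsite = (HOURLY_RATE * 8 * 2 * PSYCHOLOGISTS) + 4_000
print(onsite)           # 26320
```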

An individual student evaluation includes labor costs for the time required for school psychologists to prepare and administer assessments and to organize and report assessment results. Studies of how much time school psychologists devote simply to administering assessments during a special education evaluation estimate that the average time ranges from 45 minutes (Glutting et al., 2003) to 3.5 hours (Licciardello, 2002; Styron, 2003). This would add a cost ranging from $3,645 to $17,010 for the case study district to conduct 108 evaluations. Additionally, school psychologists report that preparation for administering, analyzing, organizing, and reporting data for each evaluation requires approximately one hour per evaluation (Licciardello, 2002; Styron, 2003). This would add an additional cost of $4,860 across the district’s 108 evaluations (i.e., $45 × 1 × 108). These two costs combined with training costs result in a range of possible total labor costs for the district of $35,545 to $69,765 (see Table 1).
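The per-evaluation components of Table 1 follow directly from the hourly rate and the administration-time estimates cited above; a minimal sketch:

```python
EVALUATIONS = 108
HOURLY_RATE = 45

# Administration time per evaluation: 45 minutes (low) to 3.5 hours (high).
admin_low  = HOURLY_RATE * 0.75 * EVALUATIONS    # 3645.0
admin_high = HOURLY_RATE * 3.5 * EVALUATIONS     # 17010.0

# One hour per evaluation to prepare, analyze, organize, and report data.
reporting = HOURLY_RATE * 1 * EVALUATIONS        # 4860

print(admin_low, admin_high, reporting)
```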

Table 1.

Estimated Labor Costs for PSW Adoption

High Estimate | District Cost | Low Estimate | District Cost
DD/C training registration costs ($600 × 31 school psychologists) | $18,600.00 | N/A | —
DD/C training hours (21 hours × $45 × 31 school psychologists) | $29,295.00 | Onsite training | $26,320.00
Cost to administer assessments (3.5 hours/evaluation) | $17,010.00 | Cost to administer assessments (45 minutes/evaluation) | $3,645.00
Time to organize assessments and develop report (1 hour/evaluation) | $4,860.00 | Time to organize assessments and develop report (1 hour/evaluation) | $4,860.00
Total labor/training costs | $69,765.00 | Total labor/training costs | $35,545.00

Note. Calculations based upon an hourly labor rate of $45 and 31 school psychologists conducting 108 evaluations. PSW = patterns of strengths and weaknesses; DD/C = dual discrepancy/consistency.

Assessment Costs

When conducting a DD/C evaluation for reading, a practitioner is instructed to assess nine broad abilities that are relevant to understanding reading (Flanagan et al., 2013). Authors of the DD/C model state that, to form a reliable and valid estimate of broad and narrow abilities, at least two qualitatively different indicators are required (Flanagan et al., 2013). Therefore, if a district wished to implement a DD/C model utilizing the Wechsler Intelligence Scale for Children–Fourth Edition (WISC-IV) and Wechsler Individual Achievement Test–Third Edition (WIAT-III) protocol, the district would need to have on hand a minimum of nine different standardized tests to meet these criteria (Flanagan et al., 2013, p. 87; see Table 2).
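The two-indicator requirement amounts to a simple coverage check over the assessment battery. The sketch below is purely illustrative (the ability and subtest names are abbreviated from Table 2, and the data structure is ours, not part of the DD/C protocol):

```python
# Illustrative check: each broad ability assessed in a DD/C reading
# evaluation needs at least two qualitatively different indicators.
battery = {
    "Gf":  ["WISC-IV matrix reasoning", "WJ-IV-COG analysis synthesis"],
    "Gc":  ["WIAT-III listening comprehension", "WISC-IV vocabulary"],
    "Gsm": ["WISC-IV digit span forward", "WISC-IV letter-number sequencing"],
    # ... remaining broad abilities from Table 2
}

for ability, indicators in battery.items():
    assert len(indicators) >= 2, f"{ability} requires a second indicator"
```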

Table 2.

Recommended Assessments for DD/C Evaluations

Broad Ability | Narrow Ability | Initial Subtests | Follow-Up Subtests
Gf | I-Induction | WISC-IV matrix reasoning | WIAT-III reading comprehension
Gf | RG-Deduction | WJ-IV-COG analysis synthesis | KABC-II story completion
Gc | LS-Listening ability | WIAT-III listening comprehension | WJ-IV-ACH oral comprehension
Gc | K0-General information | WISC-IV information | WISC-IV comprehension
Gc | VL-Lexical knowledge | WISC-IV vocabulary | WISC-IV similarities; word reasoning
Gsm | MS-Memory span | WISC-IV digit span forward | WJ-IV-COG memory for words
Gsm | MW-Working memory | WISC-IV letter-number sequencing | WISC-IV digit span backward
Gv | MV-Visual memory | WJ-IV-COG picture recognition | DAS-II recognition of pictures
Gv | Orthographic processing | Test of Orthographic Competence (TOC) | Early Reading Assessment (ERA)
Ga | PC-Phonetic coding | WIAT-III early reading skills | WJ-IV-COG sound blending
Ga | US-Speech-sound discrimination | WJ-III diagnostic supplement sound patterns voice | WJ-III-DS sound patterns music
Glr | NA-Rapid naming | WJ-IV-COG rapid picture naming | NEPSY-II speeded naming
Glr | MA-Associative memory | WJ-IV-COG visual auditory learning | WJ-III-DS memory for names
Glr | M6-Free recall memory | NEPSY-II list memory | DAS-II recall of objects
Glr | MM-Meaningful memory | WJ-IV-ACH story recall | WJ-IV-ACH story recall delayed
Gs | RS-Reading speed | WIAT-III oral reading fluency | WIAT-III reading fluency
Gs | P-Perceptual speed | WISC-IV symbol search | WISC-IV cancellation
Attention | — | WJ-IV-COG attention clinical cluster | NEPSY-II auditory attention and response
Executive functions | — | WJ-IV-COG executive processing cluster | NEPSY-II animal sorting

Note. DD/C = dual discrepancy/consistency; Gf = fluid reasoning; WISC-IV = Wechsler Intelligence Scale for Children–Fourth Edition; WIAT-III = Wechsler Individual Achievement Test–Third Edition; Gc = comprehension-knowledge; WJ-IV-COG = Woodcock-Johnson IV Tests of Cognitive Abilities; KABC-II = Kaufman Assessment Battery for Children–Second Edition; WJ-IV-ACH = Woodcock-Johnson IV Tests of Achievement; Gsm = short-term memory; Gv = visual processing; DAS-II = Differential Ability Scales–Second Edition; Ga = auditory processing; Glr = long-term storage and retrieval; WJ-III-DS = Woodcock-Johnson III Diagnostic Supplement; NEPSY-II = NEPSY–Second Edition; Gs = processing speed.

Many purchase licenses allow for a specific number of administrations per assessment. Purchase calculations were made assuming a district was prepared to administer any assessment at any of the school sites during each of the 108 evaluations. With these parameters, the total purchase costs for these assessments would be $176,224.95 (see Table 3).
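Table 3’s bottom line can be reproduced from its itemized rows; the sketch below sums the district costs as reported and also checks the per-subtest formula in the table’s first row:

```python
# District cost per assessment battery, as itemized in Table 3.
table3 = {
    "WISC-IV / WIAT-III / NEPSY-II": 8_098.00,
    "WJ-IV": 61_739.60,
    "TOC": 9_455.00,
    "WJ-III Diagnostic Supplement": 19_091.35,
    "ERA": 8_525.00,
    "KABC-II": 29_791.00,
    "DAS-II": 39_525.00,
}
print(round(sum(table3.values()), 2))   # 176224.95

# First row: 31 site licenses at $175 plus per-subtest fees across
# 108 evaluations -- (175 x 31) + ((14 x 1.50) + (5 x 0.75)) x 108.
assert (175 * 31) + ((14 * 1.50) + (5 * 0.75)) * 108 == 8_098.00
```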

Total Costs

Total costs for the district to adopt and implement the DD/C PSW LD identification method would range from $211,769.00 to $245,989.95 (labor costs + assessment purchase costs). At the low end, this would equal $1,960.82 per assessment completed in a single school year to identify potential LD in reading and $2,406.66 per assessment at the high end (i.e., $259,918.95/108).
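Dividing each reported district total by the 108 completed evaluations reproduces the per-assessment figures quoted above; a minimal sketch:

```python
# Per-assessment cost at the low and high ends reported in the text.
EVALUATIONS = 108
for total in (211_769.00, 259_918.95):
    print(round(total / EVALUATIONS, 2))   # 1960.82, then 2406.66
```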

DISCUSSION

The goal of this study was to estimate the total cost incurred by a district that seeks to implement PSW methods for the identification of LDs. As the costs associated with implementing PSW vary widely across districts with different capacities, we chose to evaluate the costs to adopt these methods assuming zero previous assessment infrastructure. Additionally, we calculated costs for a mid-sized district in Oregon, a state in which PSW methods have achieved considerable purchase. Finally, our calculations build on the recommendations of PSW advocates (Fletcher-Janzen & Reynolds, 2009; Hale, Alfonso, et al., 2010; Hale, Wycoff, & Fiorello, 2011), who contend that PSW methods should be implemented as part of a comprehensive assessment following a determination of inadequate response to intensive interventions. The calculated costs are incurred in addition to all costs associated with implementation of a full MTSS and any subsequent intervention costs. Thus, the critical consideration in evaluating the substantial price tag associated with PSW methods is to weigh their value-added. To wit, what evidence is there that the addition of PSW assessment practices helps students learn to read, write, or perform math calculations well?

On this, there is a concerning lack of evidence. Reviewing the state of the professional literature arguing for the necessity of cognitive processing assessment utilized in this way, Schneider and Kaufman (2017) colorfully conclude, “After rereading dozens of papers defending such assertions, including our own, we can say that this position is mostly backed by rhetoric in which assertions are backed by citations of other scholars making assertions backed by citations of still other scholars making assertions” (p. 8). Unfortunately, assertions are not sufficient evidence for important decisions in underfunded schools. When considering such assertions, it is important that decision makers consider empirical evidence in peer-reviewed journals, as this represents the process of validation through empirical research that defines evidence-based practices in school psychology and separates it from pseudoscience (Kratochwill, 2007; Kratochwill & Shernoff, 2004).

The implicit hypothesis underlying PSW methods is that children with a PSW profile need something different and that a careful inspection of a student’s cognitive profile would improve intervention planning. In a recent study of a large sample of struggling fourth graders, we attempted to find evidence for this assertion (Miciak, Williams, et al., 2016). If students with a PSW profile truly need something different from other struggling students who do not have a PSW profile, there should be evidence for this need in how they respond to an intensive, standardized intervention in reading. We found no evidence that this was the case. Instead, students with and without a PSW profile responded similarly once we accounted for their previous reading performance. This finding is consistent with a recent meta-analysis concluding that cognitive performance prior to intervention explained very little variation in how students responded to intervention once prior academic performance was known (Stuebing et al., 2009).

Such findings should not surprise school psychologists who are familiar with the long history of null results associated with experiments investigating aptitude-by-treatment interactions (ATI; Kearns & Fuchs, 2013; Pashler, McDaniel, Rohrer, & Bjork, 2009). In a recent meta-analysis, Burns et al. (2016) investigated the effects of using neuropsychological data for intervention screening and design. Consistent with previous meta-analyses, results revealed small effects for interventions utilizing neuropsychological data, much smaller than effects for interventions aimed at skill-by-treatment interactions, which plan and deliver interventions based on previous academic achievement. Burns and colleagues concluded there is little evidence to justify the additional costs associated with cognitive assessment and that districts should focus limited funds on direct measures of academic skills. The high costs of PSW assessment documented in the present study further inform the discussion of trade-offs and value.

The Inherent Trade-Offs of Increased Assessment

Districts that consider adopting PSW methods must carefully weigh the trade-offs associated with the additional costs of implementation (Fletcher & Miciak, 2017; Gresham & Witt, 1997). For example, our high-end estimate of $259,918.95 in total cost could be reallocated to fund intervention. To illustrate, Sound Partners© is a phonics-based reading program that in high-quality studies has a demonstrated effect size of 0.8 (National Center for Intensive Intervention, 2016). The approximate implementation cost for Sound Partners© per student is $800 (National Center for Intensive Intervention, 2016). If reallocated, $259,918.95 could fund a year of the intervention for 324 students, exactly three times as many students as could be assessed via PSW methods under our high-end estimate. Stated another way, even if our high-end estimate overstates the district’s actual adoption costs by a factor of three, those costs would still approximately equal a year of intervention for the same 108 students.
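The reallocation arithmetic is a straightforward division; a sketch using the per-student intervention cost cited above:

```python
# Students who could receive a year of Sound Partners intervention
# if the high-end PSW adoption cost were reallocated.
HIGH_END_TOTAL = 259_918.95
COST_PER_STUDENT = 800                  # approximate annual per-student cost

students_served = int(HIGH_END_TOTAL // COST_PER_STUDENT)
print(students_served)                  # 324
print(students_served / 108)            # 3.0 -- three times the 108 assessed
```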

Additionally, one must consider the opportunity cost associated with assessment. Previous studies have found that school psychologists’ time is dominated by assessment, including evaluations for the identification of LD (Falotico, 2015; Gresham & Witt, 1997). The lack of evidence for the value of cognitive assessment data suggests that this time distribution is an ineffective use of a school psychologist’s time that produces little educational benefit for the student (Burns et al., 2016; Kearns & Fuchs, 2013; Pashler et al., 2009), except in limited circumstances when an intellectual disability is suspected or prior to the initiation of formal instruction (e.g., phonological awareness assessments in kindergarten; Fletcher & Miciak, 2017). Further, this time distribution is contrary to school psychologists’ preference to spend more time assisting with intervention (Brown, Holcombe, Bolen, & Thomson, 2006; Filter, Ebsen, & Dibos, 2013). The time school psychologists spend administering assessments in a PSW LD evaluation is time not spent leveraging their expertise to develop, inform, monitor, or improve student instruction.

Finally, one must consider the impact of this assessment process and decision framework on individual students. Already struggling students are required to miss important instructional time to take cognitive assessments so that school psychologists can develop hypotheses about how the students learn and document evidence for LD. While these goals are inarguably important, we would suggest that such hypotheses could be formed just as readily, and perhaps with greater contextual relevance, by assessing how well the student is learning what is taught and by observing his or her behavior in instructional situations. Additionally, while proponents of PSW methods place considerable emphasis on the educational needs of students who are struggling and who demonstrate a PSW profile, there is comparatively little consideration for those who do not have a PSW profile but are experiencing the same academic difficulties.

Limitations

The calculations in this study represent estimates. The complex nature of DD/C and the variation in how it is implemented present challenges for calculating exact costs (see Glutting et al., 2003, p. 365, for discussion). For example, salaries and the amount of training and support necessary to implement DD/C for school psychologists will vary greatly from district to district. Additionally, our analyses assumed that the district had no existing infrastructure to implement a DD/C method. This likely inflated our cost estimate, as most school districts in the United States would possess some, if not all, of the necessary cognitive assessments and expertise. This represents a significant limitation of our study, as assessment purchase costs accounted for a large portion of the total cost estimate. Another source of uncertainty in our estimate concerns the number of students who are assessed each year. We utilized Child Find numbers for students assessed for a potential LD in reading, resulting in 108 total referrals in a year (approximately 3.5 per school). This number is likely low, and actual assessment costs would be higher, but we preferred to base these calculations on reported numbers rather than anecdotal evidence from our own experience in schools. As a result, readers should consider the costs estimated in this article in relation to their known context and differentiate across cost categories where appropriate.

CONCLUSIONS

In a time of ever-decreasing educational spending (Leachman & Mai, 2014), states and districts must ensure that available funds are directed to programs that best improve the academic and health outcomes of their students. In this study, we evaluated the costs associated with the adoption of PSW methods to identify LD following a determination of inadequate response. We estimated that this process would cost a large school district (more than 10,000 students) approximately $2,000 per student evaluated. Considering the limited rigorous evidence that these methods improve instruction and intervention, we conclude that such costs are not currently justifiable. Instead, we suggest that financially prudent educational agencies should focus on funding programs with proven evidence for effectiveness. School psychologists should play a key role in helping districts and schools choose effective programs, evaluating student progress within these programs, and suggesting modifications and accommodations that will help all students become more successful learners.

AUTHOR BIOGRAPHICAL STATEMENTS

Dr. Jeremy Miciak is an Assistant Research Professor at the University of Houston and affiliated faculty with the Texas Institute for Measurement, Evaluation, and Statistics. He conducts research related to the identification, characteristics, and treatment of learning disabilities and learning difficulties, with a special emphasis on learning difficulties among students from diverse cultural and linguistic backgrounds.

Dr. Jacob Williams is a senior advisor at Education Northwest. He conducts research with students with learning disabilities or at-risk for learning difficulties and provides technical assistance to states, districts, and schools in the Pacific Northwest in the areas of policy and practice.

Contributor Information

Jacob Williams, Education Northwest.

Jeremy Miciak, University of Houston.

REFERENCES

  1. American Psychological Association. (2005). Postgrad growth area: School psychology. gradPSYCH Magazine, 3. Retrieved from https://www.apa.org/gradpsych/2005/01/schoolpsych.aspx
  2. Bradley R, Danielson L, & Hallahan DP (Eds.). (2002). Identification of learning disabilities: Research to practice. Mahwah, NJ: Erlbaum.
  3. Brown MB, Holcombe DC, Bolen LM, & Thomson WS (2006). Role function and job satisfaction of school psychologists practicing in an expanded role model. Psychological Reports, 98(2), 486–496.
  4. Burns MK, Peterson-Brown S, Haegele K, Rodriguez M, Schmitt B, … VanDerHeyden AM (2016). Meta-analysis of academic interventions derived from neuropsychological data. School Psychology Quarterly, 31, 28–42.
  5. Falotico M (2015). School psychologists’ time allocation: Striving for “lean” school psychology (Doctoral dissertation). Miami University, FL.
  6. Filter KJ, Ebsen SA, & Dibos R (2013). School psychology crossroads in America: Discrepancies between actual and preferred discrete practices and barriers to preferred practice. International Journal of Special Education, 28(1), 88–100.
  7. Flanagan DP, Fiorello CA, & Ortiz SO (2010). Enhancing practice through application of Cattell–Horn–Carroll theory and research: A “third method” approach to specific learning disability identification. Psychology in the Schools, 47, 739–760.
  8. Flanagan DP, Ortiz SO, & Alfonso VC (Eds.). (2016). Essentials of cross-battery assessment (3rd ed.). Hoboken, NJ: Wiley.
  9. Fletcher JM, Coulter WA, Reschly DJ, & Vaughn S (2004). Alternative approaches to the definition and identification of learning disabilities: Some questions and answers. Annals of Dyslexia, 54(2), 304–331.
  10. Fletcher JM, Lyon GR, Fuchs LS, & Barnes MA (2007). Learning disabilities: From identification to intervention. New York, NY: Guilford Press.
  11. Fletcher JM, & Miciak J (2017). Comprehensive cognitive assessments are not necessary for the identification and treatment of learning disabilities. Archives of Clinical Neuropsychology, 32(1), 2–7.
  12. Fletcher JM, Stuebing KK, Barth AE, Miciak J, Francis DJ, & Denton CA (2014). Agreement and coverage of indicators of response to intervention: A multi-method comparison and simulation. Topics in Language Disorders, 34(1), 74–89.
  13. Fletcher JM, & Vaughn S (2009). Response to intervention: Preventing and remediating academic difficulties. Child Development Perspectives, 3(1), 30–37.
  14. Fletcher-Janzen E, & Reynolds CR (Eds.). (2009). Neuropsychological perspectives on learning disabilities in the era of RTI: Recommendations for diagnosis and intervention. Hoboken, NJ: Wiley.
  15. Francis DJ, Fletcher JM, Stuebing KK, Lyon GR, Shaywitz BA, & Shaywitz SE (2005). Psychometric approaches to the identification of learning disabilities: IQ and achievement scores are not sufficient. Journal of Learning Disabilities, 38, 98–108.
  16. Fuchs D, & Deshler DD (2007). What we need to know about responsiveness to intervention (and shouldn’t be afraid to ask). Learning Disabilities Research & Practice, 22(2), 129–136.
  17. Fuchs D, Mock D, Morgan PL, & Young CL (2003). Responsiveness-to-intervention: Definitions, evidence, and implications for the learning disabilities construct. Learning Disabilities Research & Practice, 18(3), 157–171.
  18. Glutting JJ, Watkins MW, & Youngstrom EA (2003). Multifactored and cross-battery ability assessments: Are they worth the effort? In Reynolds CR & Kamphaus RW (Eds.), Handbook of psychological and educational assessment of children: Intelligence, aptitude, and achievement (pp. 343–374). New York, NY: Guilford Press.
  19. Gray L, Bitterman A, & Goldring R (2013). Characteristics of public school districts in the United States: Results from the 2011–12 Schools and Staffing Survey (NCES 2013-311). Washington, DC: U.S. Department of Education, National Center for Education Statistics. Retrieved from http://nces.ed.gov/pubsearch
  20. Gresham FM (2002). Responsiveness to intervention: An alternative approach to the identification of learning disabilities. In Bradley R, Danielson L, & Hallahan D (Eds.), Identification of learning disabilities: Research to practice (pp. 467–519). Mahwah, NJ: Erlbaum.
  21. Gresham FM, & Witt JC (1997). Utility of intelligence tests for treatment planning, classification, and placement decisions: Recent empirical findings and future directions. School Psychology Quarterly, 12, 249–267.
  22. Hale JB, Alfonso V, Berninger V, Bracken B, Christo C, Clark E, … Yalof J (2010). Critical issues in response-to-intervention, comprehensive evaluation, and specific learning disabilities identification and intervention: An expert white paper consensus. Learning Disability Quarterly, 33(3), 223–236.
  23. Hale JB, & Fiorello CA (2004). School neuropsychology: A practitioner’s handbook. New York, NY: Guilford Press.
  24. Hale JB, Wycoff KL, & Fiorello CA (2011). RTI and cognitive hypothesis testing for identification and intervention of specific learning disabilities: The best of both worlds. In Flanagan DP & Alfonso VC (Eds.), Essentials of specific learning disability identification (pp. 173–202). Hoboken, NJ: Wiley.
  25. Hoskyn M, & Swanson HL (2000). Cognitive processing of low achievers and children with reading disabilities: A selective meta-analytic review of the published literature. School Psychology Review, 29, 102–119.
  26. Jimerson S, Burns M, & VanDerHeyden A (Eds.). (2007). Handbook of response to intervention: The science and practice of assessment and intervention. New York, NY: Springer.
  27. Johnson ES (2014). Understanding why a child is struggling to learn: The role of cognitive processing evaluation in learning disability identification. Topics in Language Disorders, 34(1), 59–73.
  28. Kavale KA (2002). Discrepancy models in the identification of learning disability. In Bradley R, Danielson L, & Hallahan DP (Eds.), Identification of learning disabilities: Research to practice (pp. 369–426). Mahwah, NJ: Erlbaum.
  29. Kearns DM, & Fuchs D (2013). Does cognitively focused instruction improve the academic performance of low-achieving students? Exceptional Children, 79, 263–290.
  30. Kratochwill TR (2007). Preparing psychologists for evidence-based school practice: Lessons learned and challenges ahead. American Psychologist, 62, 826–843.
  31. Kratochwill TR, & Shernoff ES (2004). Evidence-based practice: Promoting evidence-based interventions in school psychology. School Psychology Review, 33, 34–48.
  32. Leachman M, & Mai C (2014). Most states still funding schools less than before the recession. Retrieved from https://www.cbpp.org/sites/default/files/atoms/files/10-16-14sfp.pdf
  33. Licciardello LH (2002). Average effort extended in person hours to complete an initial child study team evaluation (Master’s thesis). Rowan University, Glassboro, NJ.
  34. Macmann GM, Barnett DW, Lombard TJ, Belton-Kocher E, & Sharpe MN (1989). On the actuarial classification of children: Fundamental studies of classification agreement. The Journal of Special Education, 23(2), 127–149.
  35. MacMillan DL, & Siperstein GN (2002). Learning disabilities as operationally defined by schools. In Bradley R, Danielson L, & Hallahan D (Eds.), Identification of learning disabilities: Research to practice (pp. 287–333). Mahwah, NJ: Erlbaum.
  36. Maki KE, Floyd RG, & Roberson T (2015). State learning disability eligibility criteria: A comprehensive review. School Psychology Quarterly, 30, 457–469.
  37. McGill RJ, & Busse RT (2017). When theory trumps science: A critique of the PSW model for SLD identification. Contemporary School Psychology, 21, 10–18.
  38. Miciak J, Taylor WP, Denton CA, & Fletcher JM (2015). The effect of achievement test selection on identification of learning disabilities within a patterns of strengths and weaknesses framework. School Psychology Quarterly, 30, 321–334.
  39. Miciak J, Taylor WP, Stuebing KK, & Fletcher JM (2017). Simulation of LD identification accuracy using a pattern of processing strengths and weakness method with multiple measures. Journal of Psychoeducational Assessment, 36(1), 21–33. doi:10.1177/0734282916683287
  40. Miciak J, Williams JL, Taylor WP, Cirino PT, Fletcher JM, & Vaughn S (2016). Do processing patterns of strengths and weaknesses predict differential treatment response? Journal of Educational Psychology, 108, 898–909.
  41. Naglieri JA, & Das JP (1997). Cognitive Assessment System. Chicago, IL: Riverside Publishing.
  42. National Center for Intensive Intervention. (2016). Academic intervention tools chart. Retrieved from http://www.intensiveintervention.org/chart/instructional-intervention-tools
  43. Pashler H, McDaniel M, Rohrer D, & Bjork R (2009). Learning styles: Concepts and evidence. Psychological Science in the Public Interest, 9(3), 105–119.
  44. Portland Public Schools. (2013). Guidance for the identification of specific learning disabilities. Retrieved from https://www.pps.net/cms/lib8/OR01913224/Centricity/Domain/178/pdfs/PSW_Feb_2013_Guide.pdf
  45. President’s Commission on Excellence in Special Education. (2002). A new era: Revitalizing special education for children and their families. Retrieved from https://education.ucf.edu/mirc/Research/President’s%20Commission%20on%20Excellence%20in%20Special%20Education.pdf
  46. Schneider WJ, & Kaufman AS (2017). Let’s not do away with comprehensive cognitive assessments just yet. Archives of Clinical Neuropsychology, 32(1), 8–20.
  47. Shaywitz S (2003). Overcoming dyslexia: A new and complete science-based program for reading problems at any level. New York, NY: Alfred A. Knopf.
  48. Simos PG, Rezaie R, Fletcher JM, & Papanicolaou AC (2013). Time-constrained functional connectivity analysis of cortical networks underlying phonological decoding in typically developing school-aged children: A magnetoencephalography study. Brain and Language, 125(2), 156–164.
  49. Stanovich K (1989). Has the learning disabilities field lost its intelligence? Journal of Learning Disabilities, 22, 487–492.
  50. Stuebing KK, Barth AE, Molfese PJ, Weiss B, & Fletcher JM (2009). IQ is not strongly related to response to reading instruction: A meta-analytic interpretation. Exceptional Children, 76, 31–51.
  51. Stuebing KK, Fletcher JM, LeDoux JM, Lyon GR, Shaywitz SE, & Shaywitz BA (2002). Validity of IQ-discrepancy classifications of reading disabilities: A meta-analysis. American Educational Research Journal, 39, 469–518.
  52. Styron MM (2003). A study to determine how many person hours are expended by learning consultants and school psychologists to complete an evaluation from initial referral to final placement (Master’s thesis). Rowan University, Glassboro, NJ.
  53. Tanaka H, Black J, Hulme C, Stanley LM, Kesler S, Whitfield-Gabrieli S, … Hoeft F (2011). The brain basis of the phonological deficit in dyslexia is independent of IQ. Psychological Science, 22(11), 1442–1451.
  54. U.S. News and World Report. (2016). Best social services jobs: School psychologists. Retrieved from https://money.usnews.com/careers/best-jobs/school-psychologist
  55. VanDerHeyden AM, Witt JC, & Gilbertson D (2007). A multi-year evaluation of the effects of a Response to Intervention (RTI) model on identification of children for special education. Journal of School Psychology, 45, 225–256.
  56. Vellutino FR, Scanlon DM, & Lyon GR (2000). Differentiating between difficult-to-remediate and readily remediated poor readers: More evidence against the IQ-achievement discrepancy definition of reading disability. Journal of Learning Disabilities, 33, 223–238.
