Journal of Learning Disabilities. 2022 Dec 15;56(6):453–466. doi: 10.1177/00222194221141093

Educational Technology in Support of Elementary Students With Reading or Language-Based Disabilities: A Cluster Randomized Control Trial

Lisa B. Hurwitz and Kirk P. Vanacore
PMCID: PMC10631285  PMID: 36519673

Abstract

Experts laud the potential of educational technology (edtech) to promote reading among students with disabilities, but supporting evidence is lacking. This study evaluated the effectiveness of the Lexia® Core5® Reading edtech program (Core5) on the Measures of Academic Progress® (MAP) Growth Reading and easyCBM oral reading fluency performance of students with reading or language-based disabilities in Grades K to 5. Core5 systematically addresses multiple reading domains and previously was effective in general education. We hypothesized treatment students using Core5 would outperform controls on the reading assessments. This was a cluster randomized effectiveness evaluation, with condition assignment by school (three treatment and two business-as-usual control schools). Participating students in Grades K to 5 (N = 115; nTreatment = 65) were flagged by their Chicago-area district as needing reading intervention and had Individualized Education Program (IEP) designations of specific learning disability, speech or language impairment, or developmental delay. Treatment students used Core5 to supplement Tier 1 instruction for an average of 58.76 minutes weekly for 24.58 weeks. Regressions revealed treatment students outperformed controls on MAP (B = 3.85, CI = 0.57–7.13, p = .022, d = .24), but there were no differences for oral reading fluency. MAP findings confirm edtech can effectively supplement reading instruction for this population.

Keywords: reading disabilities, language-based disabilities, reading intervention, educational technology


Nearly 2.5 million U.S. students in Grades K to 12 have specific learning disabilities (SLDs), over 1 million have speech or language impairments (SLIs), and over half a million have developmental delays (DDs) wherein they are delayed in reaching milestones in areas like communication (National Center for Education Statistics, 2021). Primary and secondary students with these disabilities often experience reading difficulties (Catts et al., 2005) and may have comorbid conditions impacting learning, such as difficulty with attention and executive functioning (Centre for Excellence, 2017; de Beer et al., 2014). Students with reading or language-based disabilities who struggle to master foundational literacy skills in elementary school face continued difficulty throughout formal education (de Beer et al., 2014). In 2019, only 12% of students with disabilities, including but not limited to SLD, SLI, and DD, were proficient on the reading portion of the National Assessment of Educational Progress (NAEP; U.S. Department of Education, 2019). Students with SLD, in turn, are three times less likely to graduate from high school (Horowitz et al., 2017). Intervening and providing these students with high-quality reading instruction in elementary school is therefore of utmost importance.

Experts have proposed using educational technology (edtech) to address reading difficulties (Reid et al., 2013) by providing dynamic features and scaffolds contingent on students’ individual needs (Cullen et al., 2014). Yet, the promise of edtech in this regard has, at times, been “more hype than reality” as there is a dearth of quality research showing positive effects of edtech used in real-world educational settings (Kim et al., 2017). A small number of edtech programs claim to employ research-based pedagogical techniques designed for students with disabilities (Dawson et al., 2016), but simply trying to apply such techniques is insufficient to prove effectiveness. The present experimental effectiveness evaluation study was designed to assess the real-world impact of one supplemental edtech program (the Lexia® Core5® Reading blended learning solution) for Grades K to 5 students with reading and language-based disabilities to begin to bridge the gap between popular rhetoric and scholarly understanding.

Effective Intervention for Students With Reading or Language-Based Disabilities

The Simple View of Reading is a theoretical framework that has been instrumental in classifying students with reading disabilities and difficulties (Catts, 2018). It posits that reading is the product of word recognition (decoding) and language comprehension (Gough & Tunmer, 1986; Hoover & Gough, 1990). Students with SLDs, such as dyslexia, often display a phonological deficit inhibiting decoding, as well as related skills like phonemic awareness (the ability to focus on and manipulate individual sounds—phonemes—in spoken words; Vellutino et al., 2004). Students with SLI tend to struggle with the second component in the Simple View of Reading—language comprehension—and often have difficulty learning new vocabulary, attending to grammatical/syntactical cues, and making inferences supporting comprehension (Lervåg et al., 2018). Sometimes language comprehension difficulties co-occur with decoding and phonology difficulties (National Institute on Deafness and Other Communicative Disorders, 2019). In that vein, there are high levels of comorbidity between SLI and dyslexia (Catts et al., 2005). Likewise, students with DD are commonly reclassified with either SLD or SLI by the end of elementary school (Delgado, 2009) and, as such, may experience difficulties with decoding, language comprehension, or both.

A robust body of literature points to effective intervention approaches for these populations (e.g., Scammacca et al., 2007; Wanzek et al., 2010). Regardless of causes or classification of reading difficulties, it has been posited that struggling readers benefit from similar instruction (Elliott & Grigorenko, 2015). Reviews suggest elementary students with reading difficulties benefit from multi-component interventions targeting both decoding and comprehension (Scammacca et al., 2007; Wanzek et al., 2010), which is in alignment with the Simple View of Reading. Core5, the supplemental program used in the present study, was designed to provide this kind of multi-component instruction. Specifically, it aligns to the Simple View of Reading with sequential instruction provided in four strands related to decoding (phonological awareness, phonics, structural analysis, and automaticity/fluency) and two related to language comprehension (vocabulary and comprehension; Wilkes et al., 2020). In a statewide evaluation of Core5 use in Grade 3, Baron and colleagues (2019) found poor decoders demonstrated increased decoding as measured by the aimsweb Reading Curriculum-Based Measure (word reading) after a year of Core5 use, while poor comprehenders saw increased aimsweb Maze test (comprehension) performance. The authors interpreted these findings as evidence that the program was successfully targeting students’ areas of greatest need per the Simple View of Reading.

In terms of instructional approaches for supporting students with reading difficulties, Swanson (1999) found that interventions generally were most impactful when educators engaged in “direct and strategy-based instruction,” which he defined as including combinations of the following: modeling and providing reminders to use reading skills; breaking down tasks into small steps and building sequentially; giving repeated and individually tailored feedback; allowing students independent practice opportunities; and/or engaging in small-group instruction. In a more recent meta-analysis, Stockard et al. (2018) also found a significant positive effect for direct instruction and identified the following additional important features: placement experiences to ensure students received appropriately leveled lessons, scripted lessons to ensure teachers describe discrete skills clearly and accurately, and regular checks for understanding. Core5 embodies these practices. As described in a review for the National Center on Intensive Intervention [NCII] (2021), Core5 provides explicit instruction in and modeling of reading strategies across target domains, includes a placement test to ensure students are leveled appropriately, provides students with ample opportunities to practice target skills, and gives students immediate corrective feedback if they provide incorrect answers. Core5 further breaks target skills into small steps, and, as students advance within levels, presents increasingly complex practice units, with opportunities to review skills across levels (NCII, 2021). If students struggle with a skill, it prompts teachers to deliver a scripted lesson following a gradual release format with direct instruction, guided practice, and independent application (NCII, 2021)—all in the spirit of the best practices identified by Swanson (1999) and Stockard et al. (2018).

Promise of Edtech

Educational policymakers have expressed optimism that edtech might provide high-quality instruction at scale, supplementing and perhaps extending educator-led instruction (Office of Educational Technology, 2017). Even in special education where student-to-teacher ratios often are low, teachers may struggle with differentiating instruction when student needs are vast and varied (Hurwitz & Vanacore, 2020). Edtech can provide a forum for some students to work independently, freeing teachers to provide targeted instruction to others (Kim et al., 2017; Regan et al., 2014). Indeed, general education research focused on Core5 has documented teachers commonly instructing some students to work independently online while they meet one-on-one or in a small group with struggling students (Wilkes et al., 2020). In addition, some edtech programs, Core5 included, contain teacher-facing dashboards with student online performance data that can inform instruction (Liu et al., 2017). Vanacore et al. (2021) found that nationally, Core5 users demonstrate greater in-program progress the more often their teachers check the teacher-facing dashboard. The authors believe this finding suggests teachers were leveraging Core5’s performance data and recommended companion resources to enrich instruction. Broadly, edtech programs and their accompanying dashboards also can free teachers’ time to engage in behavior management to ensure all students are on-task (Berninger et al., 2015).

As another advantage, edtech can be structured so students work on content appropriately challenging for them (Dawson et al., 2016; Regan et al., 2014). For example, in Core5, students work on content aligned to their ability level, which may or may not be equal to their grade level (NCII, 2021)—a valuable benefit for students who might be reading below level. Edtech also can include multimedia to bolster conceptual understanding (Kim et al., 2017). Dual-coding theory suggests presenting information via audio and visual channels simultaneously can promote strong learning (Paivio, 1986), and some research is beginning to suggest that students with language impairments may particularly benefit from this kind of dual representation (Budhrani et al., 2010). Presenting concepts both auditorily and visually is a hallmark of Core5 (NCII, 2021).

Edtech further has the potential to be highly motivating for students (Kazakoff et al., 2018)—motivation that in turn can promote stronger learning outcomes (Schiefele et al., 2012). In a review, Kazakoff et al. (2018) note that Core5 fosters motivation by allowing students to choose which reading skill they want to practice in a given session, monitor their learning via progress bars, receive in-program scaffolds so they have a successful learning experience, and earn certificates as they complete levels. Kazakoff and associates conclude these features might create a safe learning space for students with reading difficulties. In an empirical study, Macaruso et al. (2019) report general education students who used Core5 found it motivating and imply this may have contributed to their sample’s positive learning outcomes.

State of Existing Edtech Evaluative Research

Despite the optimism, edtech’s impact on students with disabilities has not been rigorously evaluated (Dawson et al., 2016). Most related research has focused on assistive technologies like text-to-speech that may compensate for reading difficulties, rather than instructional programs aimed to promote skill mastery (Liu et al., 2013). In a systematic review, Kim et al. (2017) found only seven studies evaluating edtech programs for students with learning disabilities that met What Works Clearinghouse criteria. The studies had relatively small sample sizes and tended to focus on adolescents rather than elementary-grade students (Kim et al., 2017). Other narrative reviews of studies evaluating effects of edtech on students with disabilities have noted that studies (a) often relied on researcher-created rather than validated measures (MacArthur et al., 2001; Stetter & Hughes, 2010), (b) were narrow in skill area focus (for detailed critiques, see Lowman & Dressler, 2016; Regan et al., 2014), and (c) failed to assess whether effects generalized to skills not taught in the program (a shortcoming noted in Görgen et al., 2020). In addition, interventions usually were short (e.g., less than a school semester), with limited dosage cited as a reason for a lack of pronounced learning gains (Görgen et al., 2020).

Overall, research evaluating the effects of edtech on students with disabilities has yielded mixed results. Some studies have noted positive impacts on measures of phonological awareness, decoding, spelling, syntactic and semantic knowledge, reading fluency, expressive and receptive vocabulary, and reading comprehension (Cullen et al., 2014; Görgen et al., 2020; Higgins & Raskind, 2004; Lowman & Dressler, 2016; MacArthur et al., 2001; Regan et al., 2014). However, these studies have failed to show consistently positive outcomes, with null effects on some measures of phonological awareness, phonics, decoding, spelling, and comprehension (Görgen et al., 2020; Higgins & Raskind, 2004). In a series of studies focused on learning from eBooks, Smeets et al. (2014) found students with severe language impairment needed a greater number of repeated readings than peers without disabilities to realize the same gains. Altogether, existing research suggests edtech can be effective for students with reading and/or language-based disabilities, but longer and more rigorous effectiveness and efficacy research is needed to fully substantiate the value of this instructional approach.

The Current Study

In the present cluster randomized effectiveness evaluation, we aimed to investigate the effectiveness of edtech in supporting elementary school students with reading and/or language-based disabilities (i.e., students with SLD, SLI, and/or DD receiving special education support for reading) in real-world educational settings, specifically focusing on the Lexia Core5 Reading program (Core5). In prior research, Core5 has been shown to be effective for general education students in Grades K to 5 as measured, for example, by the Measures of Academic Progress® (MAP) Growth Reading assessment (e.g., Macaruso et al., 2020) and for young children at risk for reading disabilities as measured by standardized tests of phonics and phonological awareness (O’Callaghan et al., 2016). For students reading at or below the 20th percentile, NCII’s (2021) Core5 review reports Hedges’ g effect sizes ranging from .08 to .91 on “broad reading” assessments, such as MAP, and .47 for oral reading fluency. Embodying research-based best practices, Core5 promotes both decoding and comprehension, incorporates direct and strategy-based instruction, includes features to help teachers be more effective and efficient, provides students with an individualized learning experience in which they begin the program at their ability level and progress at their own pace, includes audio and visual cues, and offers many features that might be motivating. As such, our primary hypothesis was that students in Grades K to 5 with reading and/or language-based disabilities who used Core5 during special education supplemental reading sessions would demonstrate stronger outcomes on standardized literacy assessments than students receiving business-as-usual supplemental instruction. To aid the interpretation of our main findings, we also asked how well the treatment group implemented the supplemental program (secondary research question for characterization purposes).

Method

Study Design and Randomization Procedures

This was a cluster randomized effectiveness evaluation, with random assignment at the school level (i.e., entire schools assigned to a treatment or control group). In effectiveness evaluations, (a) programs are studied in real-world settings, (b) researchers provide minimal support beyond what would typically be available outside of research, (c) significant discretion is given to practitioners in how they carry out program implementation, and (d) concurrent administration of multiple interventions is acceptable (Rossi et al., 2003; Singal et al., 2014).

In this study, the participating school district responded to an advertisement sent to a list of educational leaders across the United States, seeking a partner district that had no prior experience using Core5 and was willing to participate in a randomized study. We chose to partner with this district because it was the largest district that responded to our advertisement and met district-level inclusion criteria. All their primary schools serving Grades K to 3 (two schools) and intermediate schools serving Grades 3 to 5 (three schools) were enrolled. To ensure a similar distribution of grades/school types across conditions, we treated primary and intermediate schools as separate strata, randomly assigning one primary and two intermediate schools to the treatment group (13 special education teachers and 65 students meeting inclusion criteria), and one primary and one intermediate school to the control condition (seven special education teachers and 50 students meeting inclusion criteria). All students in the study received special education support for reading (Inclusion Criterion 1). Included students also all had Individualized Education Program (IEP) designations of SLD, SLI, and/or DD (Inclusion Criterion 2). All special education teachers who provided reading instruction to students who met inclusion criteria also were included in the study. In treatment schools, these special educators oversaw Core5 implementation; in the control schools, they continued providing business-as-usual supplemental reading instruction. General education students and teachers were excluded.
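
For illustration, the stratified cluster randomization described above can be sketched in a few lines of Python. This is a minimal sketch, assuming hypothetical school names and a hypothetical seed; it mirrors the design (one of two primary schools and two of three intermediate schools assigned to treatment) but is not the authors' actual assignment procedure.

```python
import random

# Strata: one of the two primary schools (Grades K-3) and two of the
# three intermediate schools (Grades 3-5) are drawn into treatment,
# mirroring the assignment described above. Names and seed are hypothetical.
strata = {
    "primary": ["Primary A", "Primary B"],
    "intermediate": ["Intermediate A", "Intermediate B", "Intermediate C"],
}
n_treatment = {"primary": 1, "intermediate": 2}

rng = random.Random(42)  # fixed seed so the draw is reproducible

assignment = {}
for stratum, schools in strata.items():
    treated = set(rng.sample(schools, n_treatment[stratum]))
    for school in schools:
        assignment[school] = "treatment" if school in treated else "control"

print(assignment)
```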

Sample and Setting

District

This study took place in a mid-sized school district located in the Chicago metropolitan area. The district enrolled approximately 5,000 students in Grades K to 8, of whom 17% had IEPs and received special education services.¹ Most students in the district were White (72%). An additional 15% were Hispanic, 6% Asian, 5% Black, and 3% two or more races. English learners made up 14% of the students. The district was relatively affluent, with 14% of students qualifying for free or reduced-price lunch.

The district was well-resourced. There was a one-to-one iPad program for students in Grades 1 and above. Students in Grades 3 and above were allowed to take home iPads for homework. In kindergarten, students had access to shared classroom devices.

Educators

Each school building was staffed with a school psychologist who oversaw special education case management, specialists (e.g., speech–language pathologists), and special education teachers who supported students in core subject areas like reading.

There were 20 special education teachers who delivered supplemental reading instruction in this study. Eleven (six in the treatment group) provided information on their teaching practices and demographics. The remaining teachers did not respond to survey requests. Teachers who responded to the survey were highly experienced. All but one had a master’s degree, and 82% (n = 9) had more than 20 years of teaching experience. All were White females.

Students

This study originally included 116 students in Grades K to 5. One student in the treatment group left the district before the end of the study and was removed from the dataset, reducing the analytical sample to 115. All students met the two inclusion criteria: were receiving special education support for reading and had IEP designations of SLD, SLI, and/or DD.

All students in the sample received either “push-in” (n = 12), “pull-out” (n = 47), or both push-in and pull-out (n = 56) supplemental reading instruction from a special education teacher. Students receiving push-in support participated in general education reading instruction. However, a special education teacher would push in to their classes to give them extra attention while their general education peers engaged in reading-related independent practice activities (e.g., reading an eBook) (M = 183.64 minutes per week, SD = 88.35). In contrast, students receiving pull-out support left their general education classes to receive additional small-group (two–six students) supplemental instruction in a separate space (M = 190.00 minutes per week, SD = 86.28). Treatment students used Core5 during these push-in and pull-out supplemental blocks. Both treatment and control teachers reported spending about the same amount of time providing supplemental instruction each week (p > .05), meaning that treatment students’ use of Core5 did not add to their overall time practicing reading skills.

Because having an IEP designation was an inclusion criterion, the sample was disproportionately weighted toward upper elementary grades—when students are more likely to be formally identified as having disabilities (Horowitz et al., 2017). Partially consistent with prior literature (e.g., Delgado, 2009; Horowitz et al., 2017), younger students, regardless of condition, were more likely to have IEP designations of DD and less likely to receive both push-in and pull-out instruction (ps < .001). See Table 1 for a breakdown of demographics by condition and Table S1 in the online supplemental materials for a detailed demographic breakdown within grade. Note that there were 26 students with more than one IEP designation (n = 23 SLD and SLI; n = 3 SLD and DD).

Table 1.

Baseline Characteristics of Treatment and Control Groups in the edtech Study.

Characteristic Treatment group Control group Significance
SLD 47 32 χ²(1, n = 81) = 0.09, p = .77
SLI 19 19 χ²(1, n = 38) = 0.63, p = .43
DD 14 8 χ²(1, n = 22) = 0.61, p = .26
Instructional model χ²(2, N = 115) = 0.98, p = .61
 Pull-out 24 23
 Push-in 7 5
 Both 34 22
Grade χ²(5, N = 115) = 6.44, p = .27
 Kindergarten 4 4
 Grade 1 2 4
 Grade 2 4 5
 Grade 3 18 6
 Grade 4 17 18
 Grade 5 20 18
MAP pretest 176.46 (19.53) 173.68 (18.68) t(114) = 0.77, p = .44
easyCBM pretest 67.75 (37.42) 66.30 (37.02) t(98) = 0.03, p = .97

Note. Values are numbers of participants or means (SDs) for each characteristic in the treatment and control groups. SLD = specific learning disability; SLI = speech or language impairment; DD = developmental delay; MAP = Measures of Academic Progress.

Random assignment was successful. There were no significant differences between the treatment and control groups in students’ pretest scores, grade distributions, IEP designations, or intervention models (“push-in” or “pull-out”), as shown in Table 1.

Procedure

The school district administered the pretests, and the authors engaged in random assignment in September. In October, approximately 3 weeks after pretesting concluded, the treatment group started to use Core5 during both push-in and pull-out special education reading sessions as a supplement to Tier 1 instruction. The researchers and district leaders instructed the special education teachers in the treatment group to create time during special education supplemental blocks for students to use Core5 according to program guidelines (20–80 minutes of online time per week, plus paper-based instruction as needed). These teachers also were tasked with monitoring the program’s online reporting dashboard to help gauge student progress. Because this was an effectiveness evaluation, treatment special education teachers were allowed to choose when to incorporate Core5 into instruction and were permitted to continue implementing other programs/interventions at their discretion. To help them implement Core5 with fidelity, treatment teachers—like paid Lexia customers—received training and support from Lexia staff on an ongoing basis from October through the end of the study. Researchers passively monitored usage of the online component of Core5 on a continuous basis throughout the study. The special education teachers in the control group continued delivering supplemental reading instruction without Core5 (business-as-usual). In March, researchers conducted implementation fidelity observations in the treatment schools.

Posttesting in all schools occurred in late April/early May (again administered by the school district). The treatment group continued using Core5 until posttesting commenced and resumed after the close of the study. At that point, the research team ceased monitoring their program implementation. As soon as they completed posttesting, control schools also were granted access to Core5. In late May, teachers in both conditions were asked to complete a survey providing demographic information and reflecting on the school year. The treatment group answered additional questions describing program implementation fidelity.

Curricula

Core5

All treatment classes used Core5 as a supplement to Tier 1 instruction.

Instructional program

Core5 includes online activities, teacher-led lessons, and paper-based activities focused on these literacy skill areas: phonological awareness, phonics, structural analysis, fluency/automaticity, vocabulary, and comprehension. Students begin Core5 by taking an adaptive online placement test, which places them into one of 18 levels.² The program recommends students spend between 20 and 80 minutes per week working online, depending on their reading ability. Nationally, students use the program for an average of 46 minutes per week (Dieter et al., 2020). Each level contains online activities focused on four or five different literacy skill areas, and each day students can choose which activities to work on (see Lexia Learning, 2019, for a description of each activity in each level). Students need to score at least 90% in each activity in a level to advance. If they struggle excessively in any activity, the program recommends teachers deliver a scripted offline Lexia Lesson® (Lesson) intended to help students work through the challenging content. The number of Lessons teachers deliver varies based upon the difficulty their students have while working in the online program.
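
The advancement rule described here can be sketched as simple decision logic. The following is a minimal, illustrative Python sketch: the 90% mastery criterion is stated above, but the struggle threshold is a hypothetical stand-in, since the program's actual flagging logic is not specified here.

```python
ADVANCE_THRESHOLD = 0.90   # stated mastery criterion: at least 90% in every activity
STRUGGLE_THRESHOLD = 0.50  # hypothetical; Lexia's actual flagging rule is not public

def level_status(activity_scores: dict) -> str:
    """Classify a student's standing in one Core5 level from activity scores (0-1)."""
    if all(s >= ADVANCE_THRESHOLD for s in activity_scores.values()):
        return "advance to next level"
    if any(s < STRUGGLE_THRESHOLD for s in activity_scores.values()):
        return "recommend scripted Lexia Lesson"
    return "continue online practice"

print(level_status({"phonics": 0.95, "fluency": 0.92, "vocabulary": 0.97}))
print(level_status({"phonics": 0.40, "fluency": 0.88, "vocabulary": 0.91}))
```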

After students successfully complete all activities in a level, Core5 generates online and printable Certificates for teachers to deliver to celebrate student successes. Simultaneously, the program generates paper-based Lexia Skill Builder® worksheets (Skill Builders) for teachers to assign to help students generalize their online learning and develop enhanced automaticity. In a typical year, students using Core5 nationally complete about two levels (Dieter et al., 2020). In addition to fundamental paper-based materials (i.e., Lessons, Certificates, and Skill Builders), Core5 offers optional paper-based resources (e.g., Comprehension Lessons, Close Reads, Flashcards, Games, Stickers) teachers can use at any time.

Online teacher dashboard

Teachers can continuously monitor students’ progress in Core5 through an online teacher dashboard. They have access to data on the number of levels students complete, specific skills covered, grade-level equivalents of material students are working on, and how much time students spend in the online program. They also can download and print Lessons, Certificates, and Skill Builders materials (as appropriate). The researchers and district administration had access to these data and to additional charts with school-level summaries.

Implementation support

Treatment schools participated in an Implementation Success Partnership (ISP) to support their use of Core5. ISPs are commercially available training packages offered to paid Lexia customers. Lexia ISP staff led a series of trainings for all treatment special educators, providing an overview of Core5’s online and offline components and educator data. ISP staff also led leadership check-ins with district administrators and the school psychologist at each treatment school to discuss strategies to ensure high Core5 program fidelity. In January, a small number of treatment teachers with strong Core5 online usage were added to leadership check-ins. Because they were directly implementing Core5 with students (unlike administrators or psychologists), it was determined they would better represent teacher concerns and model strong instructional strategies. Previous research has shown schools with ISPs demonstrate stronger Core5 usage and progress (Prescott et al., 2018).

Standard Curricula

Students in both treatment and control schools engaged in the “standard curricula.” The specific programs in these curricula are described below and presented in more detail in Tables S2 and S3 in the online supplemental materials. Control schools relied exclusively on programs in the standard curricula, while treatment schools used the standard curricula plus Core5.

As their Tier 1 solution, all students used an iPad version of Schoolwide’s Reading Fundamentals Program—a “balanced literacy” program with a host of fiction and nonfiction texts. Teachers could select various literacy-themed apps to augment the Schoolwide program. According to survey responses from the special education teachers, most general education teachers used Freckle, an app with comprehension and word study content, and/or Epic Reading eBooks. In smaller numbers, other apps were used, as described in Table S2.

The district did not mandate a special education supplemental curriculum, leaving teachers to select interventions/programs for push-in and pull-out supplemental instruction. Special education supplemental instruction primarily focused on building foundational reading skills, with less time dedicated to comprehension practice. Treatment teachers supervised students using Core5 plus at least one other supplemental program/intervention. Similarly, all but one teacher in the control group said students used multiple supplemental programs/interventions. All special education teachers in both conditions used at least one paper-based intervention program by Wilson: Fundations, Just Words, and/or Wilson Reading System. Wilson programs address decoding and word study skills. Fundations and Wilson Reading System also cover oral reading, vocabulary, and comprehension strategies. See Table S3 for details on other programs/interventions used during special education supplemental reading instruction.

Measures

Reading outcomes

To avoid burdening participants with extra testing, we leveraged the district’s already-planned literacy interim assessments for this study. All students completed the MAP Growth Reading assessment at pre- and posttests. Students in Grades 2 to 5 completed easyCBM’s oral reading fluency assessment at both pre- and posttests. This measure was not recommended for or administered in kindergarten and Grade 1. For both assessments, as shown in Table S1, older students earned higher scores at pretest than younger students (ps < .001), consistent with prior research (Anderson et al., 2014; Northwest Evaluation Association, 2011).

MAP

MAP is a computer-adaptive assessment that students typically complete in 45 to 60 minutes (Northwest Evaluation Association, 2011). In kindergarten and Grade 1, MAP measures achievement in (a) Foundational Skills (covering phonics and phonological awareness), (b) Vocabulary Use and Functions, (c) Literature and Informational Text, and (d) Language and Writing. In Grades 2 to 5, MAP measures achievement in (a) Word Meaning and Vocabulary Knowledge, (b) Understanding and Integrating Key Ideas and Details for Literature and Informational Text, and (c) Understanding and Interpreting Craft and Structure for Literature and Informational Text. The multiple skill areas assessed with MAP overlap greatly with the areas addressed in Core5—phonological awareness, phonics, structural analysis, fluency/automaticity, vocabulary, and comprehension. MAP generates a composite Rasch unIT (RIT) scale score for each student, which can range from 100 to 350. In norming studies, test–retest reliabilities ranged from .73 to .89, and concurrent validity with elementary-level state reading tests ranged from .58 to .83 (Northwest Evaluation Association, 2011).

Oral reading fluency

Oral reading fluency was assessed with the easyCBM passage reading fluency task (Anderson et al., 2014). This task contains approximately 250-word narrative passages written to grade-level reading standards. At each administration, students read aloud as many words as they can from a single passage within a 60-second time limit. Students receive a point for each correctly read word (including self-corrections). Alternate form and test–retest reliabilities in norming studies ranged from .83 to .97 (Anderson et al., 2014).

Intervention implementation fidelity

Data characterizing the nature of treatment instruction were triangulated across four sources: online system data, training fieldnotes, classroom observations, and teacher surveys.

Online system data

We obtained records of treatment students’ Core5 placement levels, number of levels they completed, number of weeks they used the online program, and amount of time they used the program each week.

Training fieldnotes

One or both authors attended training sessions virtually and were included on emails between Lexia ISP staff and the district. We took fieldnotes to characterize Core5 implementation.

Classroom observations

Both authors visited each treatment school, observing one special education session for nine of the 13 treatment teachers. During each observation, we scored student and teacher behaviors in 5-minute intervals. We noted the number of students wearing headphones, intraclass correlation coefficient ICC(3, 1) = 0.93, using the Core5 online program, ICC(3, 1) = 0.98, and whether assistive technology (e.g., screen-magnifying tools, special keyboards) was being used along with the Core5 online program, ICC(3, 1) = 1.00. Across each observation interval, we also noted whether teachers encouraged students (e.g., said “Good job!”) (percentage agreement = 74%, κ = 0.43), redirected students (i.e., engaged in behavior management to ensure students focused on Core5) (percentage agreement = 89%, κ = 0.79), provided literacy content support/instruction (e.g., helping with decoding exercises) (percentage agreement = 89%, κ = 0.90), and provided technical assistance to students (e.g., helping students log in to Core5) (percentage agreement = 89%, κ = 0.46). Students did not use any of the Core5 paper-based materials during our observations; therefore, we have no observation data on that portion of the program.
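
The agreement statistics above pair raw percentage agreement with Cohen's κ, which corrects for chance agreement. For reference, here is a minimal sketch of the κ computation for two raters' binary interval codes; the rater data are hypothetical, not the study's actual codes.

```python
def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters' binary codes (1 = behavior observed)."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement derived from each rater's marginal rate of coding "1"
    p1, p2 = sum(rater1) / n, sum(rater2) / n
    expected = p1 * p2 + (1 - p1) * (1 - p2)
    return (observed - expected) / (1 - expected)

# Hypothetical codes for one behavior (e.g., "teacher redirected students")
# across ten 5-minute intervals, one list per observer
rater_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
rater_b = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]
print(round(cohens_kappa(rater_a, rater_b), 2))  # 0.8, despite 90% raw agreement
```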

Teacher surveys

All treatment and control teachers were asked to provide demographic data (reported earlier), as well as information on their own curricula and curricula used in their students’ general education classes. Teachers also indicated how long they typically worked with their push-in and pull-out students each week.

Treatment teachers were asked whether they used the following Core5 paper-based resources: Lessons, Skill Builders, Certificates, Comprehension Lessons, Close Reads, Flashcards, Games, and Stickers. For the fundamental paper-based materials (Lessons, Skill Builders, and Certificates), we also asked teachers if they used them as prescribed (when flagged to do so in the online teacher dashboard) and/or on other occasions.

Analytic Approach

Our analyses consist of two parts: evaluation of (a) reading outcomes and (b) treatment implementation fidelity. First, we present multivariate analyses estimating the effect of Core5 on MAP and oral reading fluency performance. Second, we characterize implementation fidelity using online system, training fieldnotes, classroom observation, and teacher survey data.

For the first set of analyses, we ran regression models predicting each outcome variable: MAP RIT and easyCBM correct words per minute. We originally included random intercepts for schools (i.e., multilevel models), but, after adding covariates, the variance associated with schools was zero. Therefore, we chose to use more parsimonious simple linear regressions. Results are similar for both the multilevel and regression models, with the same variables significant in both models. In the models reported in this article, posttest scores were regressed on a student’s treatment condition (Core5 or control), pretest scores, disability status (SLD, SLI, and DD), intervention model (pull-out only, push-in only, or both), and grade. Categorical variables were dummy coded with the control group, SLD, and students who received both intervention models (pull-out and push-in) as reference categories. Grade was dummy coded with the youngest grade as the reference category (kindergarten for the MAP RIT model and Grade 2 for the oral reading fluency model). Pretest scores were mean centered. To calculate effect size (Cohen’s d), we used the model effect of the treatment divided by the posttest standard deviation of the control group. We also present descriptive statistics of percentile ranks to facilitate the interpretation of differences between treatment and control groups.
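
To make the model specification concrete, the following Python (statsmodels) sketch mirrors the dummy coding, pretest mean-centering, and effect size computation described above. It is a minimal sketch with synthetic data and hypothetical column names, not the authors' actual analysis script.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data; all column names are hypothetical
rng = np.random.default_rng(0)
n = 115
df = pd.DataFrame({
    "pre_map": rng.normal(175, 19, n),
    "treatment": rng.integers(0, 2, n),  # 1 = Core5, 0 = control (reference)
    "sli": rng.integers(0, 2, n),        # disability dummies; SLD is the reference
    "dd": rng.integers(0, 2, n),
    "model": rng.choice(["both", "pull_out", "push_in"], n),  # "both" = reference
    "grade": rng.choice(["K", "1", "2", "3", "4", "5"], n),   # "K" = reference
})
df["post_map"] = df["pre_map"] + rng.normal(8, 6, n) + 4 * df["treatment"]
df["pre_c"] = df["pre_map"] - df["pre_map"].mean()  # mean-centered pretest

fit = smf.ols(
    "post_map ~ treatment + pre_c + sli + dd"
    " + C(model, Treatment('both')) + C(grade, Treatment('K'))",
    data=df,
).fit()
print(fit.summary())

# Cohen's d as described above: the treatment coefficient divided by the
# control group's posttest standard deviation
d = fit.params["treatment"] / df.loc[df["treatment"] == 0, "post_map"].std()
print(round(d, 2))
```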

Results

Reading Outcomes

MAP

As shown in Table 2, our first model indicates that Core5 had a small but positive, statistically significant effect on students’ MAP scores (B = 3.85, CI = 0.57–7.13, p = .022) after accounting for variance explained by the covariates (pretest scores, disability status, intervention model, and grade). Cohen’s d was .24. Put differently, students in the treatment group gained on average 8.37 percentile points on MAP (M posttest percentile rank = 29.86, SD = 25.00), compared with an average gain of 2.6 percentile points for the control group (M posttest percentile rank = 22.52, SD = 15.00). Students with higher pretest scores earned significantly higher posttest scores (B = 0.83, CI = 0.67–1.00, p < .001). Students who received pull-out support only earned significantly higher posttest scores compared with students who received both push-in and pull-out support (B = 3.50, CI = 0.02–6.98, p = .049). No other predictors were significant.

Table 2.

Reading Outcomes Model Outputs.

Predictor         MAP: Estimate (95% CI)    p        easyCBM ORF: Estimate (95% CI)    p
Intercept         188.70 (177.73, 199.86)   <.001    30.92 (17.83, 44.01)              <.001
Treatment         3.85 (0.57, 7.13)         .022     −3.92 (−10.99, 3.15)              .274
Pretest           0.83 (0.67, 1.00)         <.001    1.02 (0.89, 1.16)                 <.001
Pull-out only     3.50 (0.02, 6.98)         .049     −7.29 (−14.81, 0.24)              .058
Push-in only      −0.74 (0.02, 6.98)        .801     6.77 (−5.13, 18.68)               .261
SLI               −0.05 (−6.58, 5.09)       .978     7.14 (−0.34, 14.62)               .061
DD                −0.18 (−6.14, 5.78)       .952     −3.40 (−16.20, 9.40)              .599
Grade 1           1.02 (−8.74, 10.78)       .836
Grade 2           4.22 (−5.64, 14.09)       .398
Grade 3           −6.55 (−16.40, 3.29)      .190     −5.44 (−19.70, 8.81)              .450
Grade 4           −4.92 (−16.58, 6.73)      .404     −8.64 (−23.55, 6.28)              .253
Grade 5           −6.92 (−19.22, 5.37)      .267     −4.94 (−21.81, 11.94)             .563
Observations      115                                99
R²                .76                                .85

Note. Values are unstandardized estimates with 95% confidence intervals. The easyCBM oral reading fluency model includes Grades 2 to 5 only. Reference categories: control condition, SLD, both intervention models, and the youngest grade in each model (kindergarten for MAP; Grade 2 for oral reading fluency). MAP = Measures of Academic Progress; ORF = oral reading fluency; SLI = speech or language impairment; DD = developmental delay.

Oral reading fluency

Differences between treatment and control groups’ correct words per minute on the easyCBM at posttest were not significant (B = −3.92, CI = −10.99 to 3.15, p = .274). Students in the treatment group gained on average 23.23 words per minute (M posttest percentile rank = 17.02, SD = 16.89), and the control group gained 26.07 words per minute (M posttest percentile rank = 18.88, SD = 19.73). Again, students with higher pretest scores had significantly higher posttest scores (B = 1.02, CI = 0.89–1.16, p < .001). No other predictors were significant.

Treatment Implementation Fidelity

Online system data

Treatment students used Core5 for an average of 24.58 weeks (SE = 0.72). The mean number of days per week students used the program was 2.79 (SE = 0.09), and the mean number of minutes per day was 21.06 (SE = 0.56). On average, students started the intervention working on Grade 1 content in Level 9 (M = 8.63, SD = 4.14) and concluded working on Grade 2 content in Level 11 (M = 11.31, SD = 3.81). Most students (83.08%) started the intervention working on content below their grade level, and 72.31% started at least two grade levels below their grade. The average number of levels students completed during the intervention was 2.68 (SE = 2.53), which slightly exceeded typical progress made by Core5 users nationally (two levels per year). By the end of the intervention, the percentage of students working on content below their grade level reduced to 67.69%, and the percentage working on content two or more grades below level reduced to 40.00%. See Table S4 in the online supplemental materials for more detailed reporting of students’ usage of and progress in Core5 broken down by grade.
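
The usage statistics above are straightforward aggregations of the online system's session logs. A minimal pandas sketch follows, assuming a hypothetical per-session log format; the actual export schema of the Core5 system may differ.

```python
import pandas as pd

# Hypothetical per-session log: one row per student per day of online use
logs = pd.DataFrame({
    "student_id": [1, 1, 1, 2, 2, 2],
    "date": pd.to_datetime(["2019-10-07", "2019-10-09", "2019-10-15",
                            "2019-10-07", "2019-10-08", "2019-10-16"]),
    "minutes": [22, 18, 25, 30, 15, 20],
})
logs["week"] = logs["date"].dt.isocalendar().week

# Per-student usage: weeks of use, days of use, and total online minutes
usage = logs.groupby("student_id").agg(
    weeks_used=("week", "nunique"),
    days_used=("date", "nunique"),
    total_minutes=("minutes", "sum"),
)
usage["days_per_week"] = usage["days_used"] / usage["weeks_used"]
usage["minutes_per_day"] = usage["total_minutes"] / usage["days_used"]
print(usage)
```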

Training fieldnotes

The ISP trainings largely focused on the practicalities of using Core5 (e.g., where to locate the paper-based materials in the program); however, we report on two salient episodes below, which provide more insight into the nature of treatment implementation.

During the first few weeks of the study, classes were adopting Core5 inconsistently, with some classes creating accounts and showing regular usage patterns more rapidly than others. Lexia ISP training staff recommended that district administrators send each school psychologist weekly summary reports of their students’ usage of the online component of Core5 to help reinforce regular usage habits. This was successful, with usage stabilizing in November and December and remaining fairly constant throughout the remainder of the study.

However, at a winter training, it was clear only some teachers had begun using Core5’s paper-based materials. In response, Lexia staff provided district administrators with a list of tips for incorporating these materials into instruction, which they distributed to treatment teachers.

Classroom observations

On average, 83% of students (SD = 21%) were observed using the online Core5 program during each 5-minute interval. The remaining students worked with their teachers on other literacy programs (e.g., Wilson). While on the Core5 online program, 69% of students (SD = 36%) used headphones. We observed only one child using an assistive device. Teachers provided literacy content assistance to students using the program in 48% of the observation intervals and engaged in behavior management in 41% of the intervals. Teachers less frequently provided students with words of encouragement (21%) or technical assistance (16%).

No students were observed using Core5’s paper-based materials. This was consistent with teachers’ self-report data during the winter training session. That said, teachers did verbally reference these materials during conversations before and after classroom observations (e.g., described distributing Certificates on Fridays to all students who had “leveled up” earlier in the week). This suggests that use of these assets may have taken place outside of our observations.

Teacher surveys

Teachers’ survey data suggested stronger implementation than observed in classrooms—particularly for the fundamental paper-based materials (i.e., Lessons, Certificates, and Skill Builders). This discrepancy may reflect these materials being used outside observation windows.

All six treatment teachers who completed the survey indicated that they delivered Lessons and gave Certificates as recommended (i.e., when students were flagged as needing a lesson or having earned a certificate). Three also reported administering Lessons when they noticed students were struggling even in the absence of a program recommendation. All self-reported administering Lessons themselves during special education sessions, and two also reported sending them home for homework. Four teachers reported asking students to complete the Skill Builders either during special education sessions (two cases) or for homework (two cases). Three teachers said they assigned Skill Builders when students were flagged as needing them and also at their discretion when they thought these materials were educationally appropriate.

Use of the optional paper-based materials was considerably lower. Comprehension Lessons, Close Reads, and Flashcards were used by one teacher each (three teachers altogether). Games and Stickers were not used by any survey respondents.

Discussion

In light of low rates of achievement among students with disabilities, including but not limited to reading or language-based disabilities (U.S. Department of Education, 2019), this study explored whether a supplemental edtech program that systematically addresses multiple areas of reading through online activities and complementary paper-based materials could effectively promote reading outcomes for elementary-grade students with reading and/or language-based disabilities (SLD, SLI, and/or DD). Our study was in response to calls for more rigorous evaluative research to address optimism around edtech (Kim et al., 2017). Although findings indicate no effect on oral reading fluency, Core5 had a statistically significant and educationally meaningful effect on MAP. The standardized effect in the present study (d = .23) is 64% larger than the average intervention conducted between 2005 and 2011 with students with learning disabilities as measured by standardized reading comprehension assessments (g = .14) (Scammacca et al., 2015). Given that control students used a host of reputable commercial products, this finding is especially noteworthy.

Because this study was conducted using a randomized design, it is reasonable to attribute gains on MAP to Core5. Students across conditions received comprehension practice in general education classes and skills instruction in special education supplemental sessions, while treatment students received further digital instruction in phonological awareness, phonics, structural analysis, fluency/automaticity, vocabulary, and comprehension through Core5. Aligned with best practices for teaching students with reading difficulties, Core5 provided students with direct and strategy-based instruction (Swanson, 1999) and covered multiple reading domains (Scammacca et al., 2007; Wanzek et al., 2010). Treatment students also could switch among practicing different skills, receive as-needed scaffolding support, and engage with activities presented multimodally—all features previously suggested to support learning and motivation among students with disabilities (Cullen et al., 2014; Kim et al., 2017). Increased motivation may, in turn, have led to stronger reading outcomes (Schiefele et al., 2012). In addition, Core5 generated reports on student performance, allowing teachers to provide targeted offline instruction—another previously noted edtech benefit (Liu et al., 2017). Outcomes from teacher surveys indicate that, in some cases, teachers used Core5’s Lessons with students struggling online. Anecdotally, teachers reported “aha” moments when viewing the teacher dashboard, which made skill gaps highly evident and informed their approach to offering direct support. Altogether, this amalgam of features may have contributed to positive MAP results.

What might explain the null results for oral reading fluency? Evaluations of other interventions used with students with disabilities also report weaker effects for fluency compared with other reading skills, particularly for students in older grades (Alexander & Slinger-Constant, 2004; Torgesen, 2004). Considerable reading practice over years of instruction is needed to achieve grade-level fluency (Torgesen, 2004). It is possible the treatment program used in this study did not provide sufficient fluency practice opportunities to influence this outcome area. Core5 does not include speech recognition technology to support oral fluency practice and does not introduce much connected text before Level 12. Given that the average treatment student concluded the year in Level 11, too few may have reached this content for meaningful full-sample gains. Results may have been stronger with a longer or more intensive intervention where students could have advanced further in the program (Alexander & Slinger-Constant, 2004). In addition, participants may not have taken full advantage of the program’s optional offline resources, which include practice in oral reading of connected text. Modifications to the treatment program to encourage more consistent use of those resources (currently listed as optional) might better support fluency development. That said, because fluency is most malleable in early elementary school (Torgesen, 2004), we may have observed more pronounced gains in fluency had data been available for kindergarten and Grade 1 students in the sample.

Despite the null results for oral reading fluency, it is important to appropriately value the gains made by treatment students on the MAP. Notably, MAP demonstrates strong content and predictive validity with other high-stakes reading assessments (Northwest Evaluation Association, 2011), suggesting students ended the year overall better prepared academically.

Study Limitations

This was an effectiveness evaluation situated in a real-world setting. As such, it was less highly controlled than an “efficacy” study, where interventions are tested under ideal circumstances with strictly enforced implementation instructions (Rossi et al., 2003). We initially hoped this study would be closer to the “efficacy” end of the efficacy-effectiveness spectrum. However, it became clear during data collection that the district was accustomed to using a variety of paper- and technology-based literacy resources. Thus, students’ curricular exposure was complex, with treatment students using many programs/interventions in addition to Core5, and control students using a wide range of programs/interventions as well. Future research may wish to screen for use of multiple alternate programs during recruitment or conduct evaluations under more tightly controlled conditions (e.g., with researchers rather than teachers delivering the intervention) to provide a purer estimate of the impact of edtech programs like Core5.

As another shortcoming, we had limited insight into treatment implementation, especially regarding the offline component of the program (i.e., use of the paper-based resources). Only one site visit was conducted, during which we neither observed every teacher nor saw any use of the paper-based materials. We attempted to complement this by monitoring online system data (available for all treatment students), virtually attending training sessions, and surveying teachers. However, survey response rates were low. Although triangulating across data sources is a study strength, each method individually was flawed. Teachers might have over-represented fidelity in surveys or implemented in an uncharacteristic manner during the observation. One site visit may have been inadequate for a program like Core5, where paper-based materials are not necessarily intended for daily use. It is unfortunate our data are limited because fidelity of treatment implementation is a major contributing factor to program effectiveness (Durlak & DuPre, 2008). Poor survey response rates and lack of observation data in control classes also limit our ability to characterize non-Core5 instruction and rule out alternate explanations about other instructional differences that might explain results. Although our study compares positively with other edtech evaluations in special education—which typically do not collect any program implementation data (as reviewed in the work of Swanson et al., 2013)—there is considerable room for improvement. With greater resources, we recommend conducting more regular in-person implementation fidelity checks.

Directions for Future Research

Future researchers could advance scholarly understanding by attempting to scale many aspects of the present study design. More observation data and a larger sample would have allowed us to address further questions, such as the extent to which outcomes varied across teachers, grade levels, intervention delivery model (push-in vs. pull-out), and other important dimensions. Likewise, it would be beneficial to replicate this study with a larger battery of assessments. MAP simultaneously addresses multiple reading skills, but future research leveraging a variety of tests focused on distinct skills would provide more fine-grained insights.

When considering the external validity of the current study, it is worth noting that this was a fairly affluent and well-resourced community, with a one-to-one iPad program, staff school psychologists overseeing special education programs in every building, and a cadre of highly experienced special education teachers. It is fairly typical of research on students in special education to be fielded in higher socio-economic status (SES) communities, but we join others in calls for work of this type to be conducted in more SES-diverse settings (Alper, 2017).

Finally, we encourage others to extend our work by conducting rigorous experimental research on additional edtech programs for special education students. Many programs market themselves as appropriate for students with disabilities and include features absent from Core5, such as options to customize text appearance or more pervasive audio support (Dawson et al., 2016). Additional research could further elucidate optimal instructional designs and implementation practices in special education settings (Kim et al., 2017).

Conclusion

Students with reading or language-based disabilities experience greater difficulty than their peers in mastering foundational literacy skills, which in turn can negatively impact their academic achievement. (For reading proficiency data on all students with disabilities, see U.S. Department of Education, 2019.) The present findings suggest implementing an edtech program such as Core5 specifically, or edtech programs more broadly, may bolster reading outcomes for students with disabilities and thus help mitigate these risks. Nonetheless, to ensure maximum benefits, it is important to identify programs that address multiple skill areas (Görgen et al., 2020) and to implement the full suite of program resources over a sufficiently extended period.

Supplemental Material

sj-docx-1-ldx-10.1177_00222194221141093 – Supplemental material for Educational Technology in Support of Elementary Students With Reading or Language-Based Disabilities: A Cluster Randomized Control Trial by Lisa B. Hurwitz and Kirk P. Vanacore in Journal of Learning Disabilities

Footnotes

1. Demographic data for individual student participants and for the schools serving Grades K to 3 and Grades 3 to 5 might vary from the demographics of the district as a whole.

2. Since the time when data were collected, Lexia added three additional levels to Core5 focused on skills for readers in Grades 3 to 5: Levels 15, 18, and 21.


Author Note: We thank Pamela E. Hook and Paul Macaruso for feedback on drafts of this manuscript, Amira Aljabar for providing participating educators with implementation support, and the students, educators, and district leaders who participated in this study.

This article evaluates the effectiveness of a commercial product. The authors were employed by Lexia Learning at the time of data collection. Teachers and school personnel carried out the implementation of the educational technology program independently.

Funding: The author(s) received no financial support for the research, authorship, and/or publication of this article.

ORCID iD: Lisa B. Hurwitz https://orcid.org/0000-0002-3347-2464

Supplemental Material: Supplemental material for this article is available on the Journal of Learning Disabilities website with the online version of this article.

References

1. Alexander A. W., Slinger-Constant A.-M. (2004). Current status of treatments for dyslexia: Critical review. Journal of Child Neurology, 19(10), 744–758. 10.1177/08830738040190100401
2. Alper M. (2017). Giving voice: Mobile communication, disability, and inequality. MIT Press.
3. Anderson D., Alonzo J., Tindal G., Farley D., Irvin P. S., Lai C.-F., Saven J. L., Wray K. A. (2014). Technical manual: easyCBM. Behavioral Research and Teaching, University of Oregon.
4. Baron L. S., Hogan T. P., Schechter R. L., Hook P. E., Brooke E. C. (2019). Can educational technology effectively differentiate instruction for reader profiles? Reading and Writing, 32(9), 2327–2352. 10.1007/s11145-019-09949-4
5. Berninger V. W., Nagy W., Tanimoto S., Thompson R., Abbott R. D. (2015). Computer instruction in handwriting, spelling, and composing for students with specific learning disabilities in Grades 4–9. Computers & Education, 81, 154–168. 10.1016/j.compedu.2014.10.005
6. Budhrani K. S., Acosta I. P., Espiritu R. M., Ngo C. C., Wong J. L. (2010). Speech and language intervention tool (SPEL-IT): House on phonic hill [Symposium session]. Proceedings of the Graphics and Multimedia Symposium, Kajang, Malaysia.
7. Catts H. W. (2018). The simple view of reading: Advancements and false impressions. Remedial and Special Education, 39(5), 317–323. 10.1177/0741932518767563
8. Catts H. W., Adlof S. M., Hogan T. P., Weismer S. E. (2005). Are specific language impairment and dyslexia distinct disorders? Journal of Speech, Language, and Hearing Research, 48(6), 1378–1396. 10.1044/1092-4388(2005/096)
9. Centre for Excellence. (2017). Understanding dyslexia. Author.
10. Cullen J. M., Alber-Morgan S. R., Schnell S. T., Wheaton J. E. (2014). Improving reading skills of students with disabilities using Headsprout Comprehension. Remedial and Special Education, 35(6), 356–365. 10.1177/0741932514534075
11. Dawson K., Antonenko P. P., Sahay S., Lombardino L. (2016). How mobile app developers conceive of dyslexia and what it means for mobile app users. Interaction Design and Architectures, 28, 69–84.
12. de Beer J., Engels J., Heerkens Y., van der Klink J. (2014). Factors influencing work participation of adults with developmental dyslexia: A systematic review. BMC Public Health, 14, 77. 10.1186/1471-2458-14-77
13. Delgado C. E. F. (2009). Fourth grade outcomes of children with a preschool history of developmental disability. Education and Training in Developmental Disabilities, 44(4), 573–579. http://www.jstor.org/stable/24234264
14. Dieter K., Studwell J., Vanacore K. P. (2020, July 10–13). Differential responses to personalized learning recommendations revealed by event-related analysis [Conference session]. 13th International Conference on Educational Data Mining (EDM), Ifrane, Morocco.
15. Durlak J. A., DuPre E. P. (2008). Implementation matters: A review of research on the influence of implementation on program outcomes and the factors affecting implementation. American Journal of Community Psychology, 41(3), 327–350. 10.1007/s10464-008-9165-0
16. Elliott J., Grigorenko E. L. (2015). The dyslexia debate. Cambridge University Press.
17. Görgen R., Huemer S., Schulte-Körne G., Moll K. (2020). Evaluation of a digital game-based reading training for German children with reading disorder. Computers & Education, 150, 103834. 10.1016/j.compedu.2020.103834
18. Gough P. B., Tunmer W. E. (1986). Decoding, reading, and reading disability. Remedial and Special Education, 7(1), 6–10. 10.1177/074193258600700104
19. Higgins E. L., Raskind M. H. (2004). Speech recognition-based and automaticity programs to help students with severe reading and spelling problems. Annals of Dyslexia, 54(2), 365–389. 10.1007/s11881-004-0017-9
20. Hoover W. A., Gough P. B. (1990). The simple view of reading. Reading and Writing, 2(2), 127–160. 10.1007/BF00401799
21. Horowitz S. H., Rawe J., Whittaker M. C. (2017). The state of learning disabilities: Understanding the 1 in 5. National Center for Learning Disabilities.
22. Hurwitz L. B., Vanacore K. P. (2020, May). “There’s so much we’re doing”: Educational technology in special education settings. In A. R. Lauricella (Chair), Going one-to-one is easier said than done: Understanding technology use in educational settings [Conference session]. International Communications Association Conference, Gold Coast, Australia.
23. Kazakoff E. R., Orkin M., Bundschuh K., Schechter R. (2018). Fostering engagement in educational technologies through developmental theory and program data. In Roscoe R., Craig S., Douglas I. (Eds.), End-user considerations in educational technology design (pp. 99–122). IGI Global.
24. Kim M. K., McKenna J. W., Park Y. (2017). The use of computer-assisted instruction to improve the reading comprehension of students with learning disabilities: An evaluation of the evidence base according to the What Works Clearinghouse standards. Remedial and Special Education, 38(4), 233–245. 10.1177/0741932517693396
25. Lervåg A., Hulme C., Melby-Lervåg M. (2018). Unpicking the developmental relationship between oral language skills and reading comprehension: It’s simple, but complex. Child Development, 89(5), 1821–1838. 10.1111/cdev.12861
26. Lexia Learning. (2019). Detailed scope & sequence. https://www.lexialearningresources.com/core5/licensed/core5_program_references/Core5_Detailed_Scope_Sequence.pdf
27. Liu G.-Z., Wu N.-W., Chen Y.-W. (2013). Identifying emerging trends for implementing learning technology in special education: A state-of-the-art review of selected articles published in 2008–2012. Research in Developmental Disabilities, 34(10), 3618–3628. 10.1016/j.ridd.2013.07.007
28. Liu M., Kang J., Liu S., Zou W., Hodson J. (2017). Learning analytics as an assessment tool in serious games: A review of literature. In Ma M., Oikonomou A. (Eds.), Serious games and edutainment applications (Vol. 2, pp. 537–563). Springer International Publishing. 10.1007/978-3-319-51645-5_24
29. Lowman J. J., Dressler E. V. (2016). Effects of explicit vocabulary videos delivered through iPods on students with language impairments. Journal of Special Education Technology, 31(4), 195–206. 10.1177/0162643416673914
30. MacArthur C. A., Ferretti R. P., Okolo C. M., Cavalier A. R. (2001). Technology applications for students with literacy problems: A critical review. The Elementary School Journal, 101(3), 273–301. 10.1086/499669
31. Macaruso P., Marshall V., Hurwitz L. B. (2019). Longitudinal blended learning in a low SES elementary school [Conference session]. Proceedings of Global Conference on Learning and Technology (Global Learn 2019), 253–262. https://www.learntechlib.org/primary/p/210313/
32. Macaruso P., Wilkes S., Prescott J. E. (2020). An investigation of blended learning to support reading instruction in elementary schools. Educational Technology Research and Development, 68(6), 2839–2852. 10.1007/s11423-020-09785-2
33. National Center for Education Statistics. (2021, May). Students with disabilities. https://nces.ed.gov/programs/coe/indicator/cgg
34. National Center on Intensive Intervention. (2021). Intervention taxonomy brief: Lexia Core5 Reading. https://intensiveintervention.org/sites/default/files/Core5-Brief-2021.pdf
35. National Institute on Deafness and Other Communicative Disorders. (2019). NIDCD fact sheet: Voice, speech, and language: Specific language impairment. National Institutes of Health.
36. Northwest Evaluation Association. (2011). Technical manual for Measures of Academic Progress® (MAP®) and Measures of Academic Progress for Primary Grades (MPG). Northwest Evaluation Association.
37. O’Callaghan P., McIvor A., McVeigh C., Rushe T. (2016). A randomized controlled trial of an early-intervention, computer-based literacy program to boost phonological skills in 4- to 6-year-old children. British Journal of Educational Psychology, 86(4), 546–558. 10.1111/bjep.12122
38. Office of Educational Technology. (2017). Reimagining the role of technology in education: 2017 national education technology plan update. U.S. Department of Education. https://tech.ed.gov/files/2017/01/NETP17.pdf
39. Paivio A. (1986). Mental representations: A dual coding approach. Oxford University Press.
40. Prescott J. E., Van Voorhis M., Roger T., Schechter R. (2018, June). Improving reading instruction: Advantages of providing tiered, year-long implementation support [Paper presentation]. International Society for Technology in Education Conference, Chicago, IL.
41. Regan K., Berkeley S., Hughes M., Kirby S. (2014). Effects of computer-assisted instruction for struggling elementary readers with disabilities. The Journal of Special Education, 48(2), 106–119. 10.1177/0022466913497261
42. Reid G., Strnadová I., Cumming T. (2013). Expanding horizons for students with dyslexia in the 21st century: Universal design and mobile technology. Journal of Research in Special Educational Needs, 13(3), 175–181. 10.1111/1471-3802.12013
43. Rossi P. H., Lipsey M. W., Freeman H. E. (2003). Evaluation: A systematic approach (7th ed.). SAGE.
44. Scammacca N. K., Roberts G., Vaughn S., Stuebing K. K. (2015). A meta-analysis of interventions for struggling readers in Grades 4–12: 1980–2011. Journal of Learning Disabilities, 48(4), 369–390. 10.1177/0022219413504995
45. Scammacca N. K., Vaughn S., Roberts G., Wanzek J., Torgesen J. K. (2007). Extensive reading interventions in grades K-3: From research to practice. RMC Research Corporation, Center on Instruction.
46. Schiefele U., Schaffner E., Möller J., Wigfield A. (2012). Dimensions of reading motivation and their relation to reading behavior and competence. Reading Research Quarterly, 47(4), 427–463. 10.1002/RRQ.030
47. Singal A. G., Higgins P. D. R., Waljee A. K. (2014). A primer on effectiveness and efficacy trials. Clinical and Translational Gastroenterology, 5(1), e45. 10.1038/ctg.2013.13
48. Smeets D. J. H., van Dijken M. J., Bus A. G. (2014). Using electronic storybooks to support word learning in children with severe language impairments. Journal of Learning Disabilities, 47(5), 435–449. 10.1177/0022219412467069
49. Stetter M. E., Hughes M. T. (2010). Computer-assisted instruction to enhance the reading comprehension of struggling readers: A review of the literature. Journal of Special Education Technology, 25(4), 1–16. 10.1177/016264341002500401
50. Stockard J., Wood T. W., Coughlin C., Rasplica Khoury C. (2018). The effectiveness of direct instruction curricula: A meta-analysis of a half century of research. Review of Educational Research, 88(4), 479–507. 10.3102/0034654317751919
51. Swanson E., Wanzek J., Haring C., Ciullo S., McCulley L. (2013). Intervention fidelity in special and general education research journals. The Journal of Special Education, 47(1), 3–13. 10.1177/0022466911419516
52. Swanson H. L. (1999). Interventions for students with learning disabilities: A meta-analysis of treatment outcomes. Guilford Press.
53. Torgesen J. K. (2004). Avoiding the devastating downward spiral: The evidence that early intervention prevents reading failure. American Educator, 28(3), 6–19. https://www.aft.org/periodical/american-educator/fall-2004/avoiding-devastating-downward-spiral
54. U.S. Department of Education. (2019). National Assessment of Educational Progress (NAEP) report card: Reading. https://www.nationsreportcard.gov/reading/nation/achievement/?grade=4
55. Vanacore K. P., Dieter K., Hurwitz L. B., Studwell J. (2021). Longitudinal clusters of online educator portal access: Connecting educator behavior to student outcomes [Conference session]. Proceedings of the 11th International Learning Analytics and Knowledge Conference (LAK21), 540–545. 10.1145/3448139.3448195
56. Vellutino F. R., Fletcher J. M., Snowling M. J., Scanlon D. M. (2004). Specific reading disability (dyslexia): What have we learned in the past four decades? Journal of Child Psychology and Psychiatry, 45(1), 2–40. 10.1046/j.0021-9630.2003.00305.x
57. Wanzek J., Wexler J., Vaughn S., Ciullo S. (2010). Reading interventions for struggling readers in the upper elementary grades: A synthesis of 20 years of research. Reading and Writing, 23(8), 889–912. 10.1007/s11145-009-9179-5
58. Wilkes S., Kazakoff E. R., Prescott J. E., Bundschuh K., Hook P. E., Wolf R., Hurwitz L. B., Macaruso P. (2020). Measuring the impact of a blended learning model on early literacy growth. Journal of Computer Assisted Learning, 36(5), 595–609. 10.1111/jcal.12429
