Author manuscript; available in PMC 2015 Sep 2.
Published in final edited form as: Assess Eff Interv. 2012 Oct 17;38(2):71–75. doi: 10.1177/1534508412461434

Advanced (Measurement) Applications of Curriculum-Based Measurement in Reading

Yaacov Petscher, Kelli Dawn Cummings, Gina Biancarosa, Hank Fien
PMCID: PMC4557774  NIHMSID: NIHMS718581  PMID: 26346551

Abstract

The purpose of this article is to provide a commentary on the current state of several measurement issues pertaining to curriculum-based measures of reading (R-CBM).1 We begin by providing an overview of the utility of R-CBM, followed by a presentation of five specific measurement considerations: 1) the reliability of R-CBM oral reading fluency, 2) issues pertaining to form effects, 3) the generalizability of scores from R-CBM, 4) measurement error, and 5) linearity of growth in R-CBM. We conclude by presenting the purpose of this special issue and broadly introducing its articles. Because oral reading fluency is one of the most common measures of R-CBM, much of the review is focused on this particular type of assessment; however, the issues presented extend to other assessments of R-CBM.


Oral reading fluency (ORF) is one of many curriculum-based measures of reading (R-CBM) that serve as indicators of overall reading achievement (Fuchs, Fuchs, Hosp, & Jenkins, 2001). Research has demonstrated the importance of fluently reading connected text as a theoretical construct (Hudson, Pullen, Lane, & Torgesen, 2009; LaBerge & Samuels, 1974), and scores from fluency measures have been validated as proxies for measuring reading comprehension (Buck & Torgesen, 2003; Riedel, 2007; Roehrig et al., 2008; Shapiro, Solari, & Petscher, 2008). As a result, measures of ORF have widespread utility as screeners for future reading comprehension difficulties and as tools for monitoring student progress in overall reading skills. Along with a decade of attention to accountability, the recent spread of Response to Intervention (RtI) models as a means of identifying and monitoring students at risk for reading failure has made R-CBM, and ORF in particular, more popular than ever (Cummings, Atkins, Allison, & Cole, 2008).

Whereas ORF measures used to be idiographic and truly based in local curricula (e.g., Hintze, Shapiro, Conte, & Basile, 1997; Shinn, Gleason, & Tindal, 1989), a number of standardized measurement systems with common sets of reading passages have since been developed; for example, AIMSweb, DIBELS, easyCBM, and Edcheckup are all widely used R-CBM measurement systems that include a measure of ORF. Although the procedures, prompts, and passages can vary among measurement systems, certain practices have become standard across systems. Typical R-CBM scoring conventions include (a) counting the total words read correctly, (b) counting mispronunciations, substitutions, omissions, and transpositions as errors, (c) ignoring inserted or repeated words, (d) crediting students who "self-correct" a previously mispronounced word within three seconds, and (e) supplying a word when a student hesitates or struggles with it for three seconds, but marking it as an error on the test form (Hosp, Hosp, & Howell, 2007; Stecker et al., 2005). In addition, most systems offer schools benchmark passages for occasional status checks (i.e., screening purposes) and for more frequent checks of response to intervention (i.e., progress monitoring). R-CBM data may be aggregated at the classroom, grade, or school level to provide quarterly indicators of school health that can be used to allocate resources (Cummings et al., 2008).
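To make these conventions concrete, the following is a minimal sketch (in Python, with a hypothetical record of one timed reading) of how a words-correct-per-minute score follows from the rules above; it is an illustration only, not the scoring code of any particular measurement system.

```python
# Minimal sketch of typical R-CBM ORF scoring conventions (hypothetical data).
# Mispronunciations, substitutions, omissions, and transpositions count as errors;
# insertions and repetitions are ignored; self-corrections within 3 seconds are credited.

def words_correct_per_minute(words_attempted, errors, self_corrections, seconds=60):
    """Return WCPM given counts from a one-minute timed reading.

    words_attempted  -- total words the student reached in the passage
    errors           -- mispronunciations, substitutions, omissions, transpositions,
                        and words supplied by the examiner after a 3-second hesitation
    self_corrections -- errors fixed by the student within 3 seconds (credited back)
    """
    words_correct = words_attempted - errors + self_corrections
    return round(words_correct * 60 / seconds)

# Example: 112 words attempted, 9 errors, 2 self-corrections in a 60-second reading.
print(words_correct_per_minute(112, 9, 2))  # 105 WCPM
```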

Regardless of the system used, R-CBM research across the last 25 years has focused on investigating (a) the technical features of the static score (i.e., baseline or aggregate scores), (b) the technical features of the slope (i.e., time-series performance), and (c) the instructional utility of the measure (Deno, 2003; Fuchs, 2004). However, the increasingly widespread and novel use of ORF for higher-stakes accountability decisions has brought closer scrutiny to the measure and created demand for an expanded set of technical adequacy and validity evidence. Many of these concerns hinge on psychometric issues, such as reliability and error of measurement for estimates of students' status and growth.

Reliability of R-CBM ORF

Generally speaking, reliability estimates for R-CBM ORF measures are uniformly high. For example, when estimated using three or more passages administered concurrently, alternate-form reliability for a single passage is usually estimated at .80 and above, and reliability for the set of passages is approximately .90 and above (Baker et al., 2008; Biancarosa, Bryk, & Dexter, 2010; Howe & Shinn, 2002; Roberts, Good, & Corcoran, 2005). As a result, and perhaps for historical reasons (e.g., Marston, 1989), most, though not all, R-CBM systems take the median or mean score from three passages administered concurrently. In a 2011 review of the National Center on Response to Intervention (NCRTI) website, the authors identified 11 unique R-CBM screening tools. Of those, six required that two or more reading passages be administered for screening assessment purposes. For progress-monitoring purposes, five R-CBM tools required the administration of multiple passages per assessment period. Although the administration of multiple passages per assessment period improves the static reliability of the average score, it is not yet known whether this practice sufficiently addresses some of the other concerns about ORF validity.
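The gain from aggregating multiple passages follows directly from classical test theory. As a rough check on the figures above, the Spearman-Brown prophecy formula projects the reliability of a k-passage composite from the single-passage alternate-form reliability; the sketch below (Python, with illustrative values only) shows that a single-passage reliability of .80 implies roughly .92 for a three-passage composite (a mean; the median behaves similarly).

```python
# Spearman-Brown projection: reliability of a composite of k parallel forms
# given the reliability (here, alternate-form) of a single form.

def spearman_brown(r_single, k):
    return k * r_single / (1 + (k - 1) * r_single)

for k in (1, 2, 3):
    print(k, round(spearman_brown(0.80, k), 2))
# 1 0.8
# 2 0.89
# 3 0.92  -- consistent with composite reliabilities of about .90 and above
```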

Form Effects and R-CBM ORF

As is evident from the lower alternate-form reliability of single-passage administrations compared with multiple-passage administrations, reliability suffers from passage-specific differences that cannot be eliminated entirely either through careful text authoring (e.g., using readability formulae) or through the typical ORF measure development process (Ardoin, Suldo, Witt, Aldrich, & McDonald, 2005; Ardoin, Williams, Christ, Klubnik, & Wellborn, 2010; Betts, Pickart, & Heistad, 2009; Christ & Ardoin, 2009). Indeed, recent research has demonstrated that, from a progress-monitoring perspective, these passage-specific, or form, effects can have a profound influence on the trajectories of monitored students (Ardoin & Christ, 2009; Francis, Santi, Barr, Fletcher, Varisco, et al., 2008; Hintze & Christ, 2004) as well as on diagnostic accuracy (Petscher & Kim, 2011). As a result, there are now calls for equating of ORF probe passages. In fact, researchers have begun to use equated ORF scores in their studies (e.g., Barth, Stuebing, Fletcher, Cirino, Francis, et al., 2012; Denton, Barth, Fletcher, Wexler, Vaughn, et al., 2011), but the trade-offs among various methods of equating and the costs associated with them have only begun to be investigated (Albano & Rodriguez, 2012; Christ & Ardoin, 2009; Francis et al., 2008; Griffiths, VanDerHeyden, Skokut, & Lilles, 2009).
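One simple member of the family of equating methods discussed in this literature is linear equating, which places scores from one passage onto the scale of a reference passage by matching means and standard deviations. The sketch below (Python, with made-up score vectors) illustrates the idea only; the studies cited above use more elaborate designs (e.g., common-person or item response theory based equating).

```python
# Linear (mean-sigma) equating sketch: map WCPM scores on a harder passage (Form B)
# onto the scale of a reference passage (Form A) using hypothetical paired scores.
import statistics as st

form_a = [92, 110, 75, 130, 88, 101, 119, 84]   # reference passage
form_b = [85, 102, 70, 121, 80, 95, 110, 78]    # harder passage, same students

def linear_equate(score_b, a_scores, b_scores):
    """Transform a Form B score to the Form A scale by matching means and SDs."""
    slope = st.stdev(a_scores) / st.stdev(b_scores)
    intercept = st.mean(a_scores) - slope * st.mean(b_scores)
    return slope * score_b + intercept

print(round(linear_equate(90, form_a, form_b), 1))  # a Form B score of 90, on the Form A scale
```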

Generalizability and Dependability of R-CBM ORF

Studies examining ORF progress-monitoring measures from a Generalizability (G) theory perspective have demonstrated that although relative generalizability is very good, absolute generalizability is much lower. The advantage of G theory studies is that they account for multiple sources of unreliability (or error) simultaneously and also allow for projection of reliability under different circumstances. In the first such G theory study (Hintze, Owen, Shapiro, & Daly, 2000), researchers accounted for variance due to the measurement materials (i.e., passages based on a long-term versus a short-term goal level of difficulty), the grade level of students (second through fifth grade), and assessment occasions (twice per week over either eight or ten weeks) and found that the reliability of ORF for assessing growth for an individual student was .90 when using long-term goal materials (i.e., end of grade level) and .80 when using both instructional-level and long-term goal materials. Projecting the reliability when only a single passage source is used for two grade levels, dependability dropped to .82 when using long-term goal materials and .67 when using both instructional-level and long-term goal materials (Hintze et al., 2000). To date, the only other G theory study estimated the reliability of slopes using 20 grade-level passages (coefficients ranged from .81 to .97; Poncy, Skinner, & Axtell, 2005). In contrast to Hintze et al., Poncy and colleagues (2005) examined the influence of passages alone on reliability. Together these studies suggest that alternate-form reliability may overestimate true reliability because traditional psychometric calculations for reliability (i.e., correlation coefficients) do not take into account the possibility of absolute score differences across forms (e.g., Albano & Rodriguez, 2012).
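The distinction between relative and absolute generalizability comes down to which variance components count as error: a relative (G) coefficient treats only person-by-facet interactions as error, whereas an absolute (dependability, or phi) coefficient also charges the main effects of the passages (and, in fuller designs, occasions) against the score. A small sketch under assumed variance components (Python; the numbers are invented for illustration and are not taken from the studies above) shows why the absolute coefficient is always the lower of the two.

```python
# Toy one-facet G-study: persons crossed with passages (p x i design).
# Variance components are hypothetical, chosen only to illustrate the formulas.

var_p  = 400.0   # person (universe-score) variance in WCPM^2
var_i  = 60.0    # passage main effect (some passages are harder overall)
var_pi = 120.0   # person-by-passage interaction plus residual error

def g_coefficient(n_passages):
    """Relative coefficient: only the interaction/residual counts as error."""
    return var_p / (var_p + var_pi / n_passages)

def phi_coefficient(n_passages):
    """Absolute (dependability) coefficient: passage main effect is also error."""
    return var_p / (var_p + (var_i + var_pi) / n_passages)

for n in (1, 3):
    print(n, round(g_coefficient(n), 2), round(phi_coefficient(n), 2))
# With 1 passage:  G ~= 0.77, phi ~= 0.69
# With 3 passages: G ~= 0.91, phi ~= 0.87
```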

Error of Measurement for ORF

Related to the issue of reliability is the consideration of the standard error of measurement (SEM) associated with ORF assessment. SEM provides a transparent metric for evaluating the degree of measurement error associated with a student's particular raw score. Poncy and colleagues (2005) offered the first peer-reviewed publication to address the SEM associated with R-CBM, and these results have since been replicated by other researchers (e.g., Christ & Ardoin, 2006; Christ & Silberglitt, 2007). The SEM associated with a single ORF score has been found to range from five to 15 wcpm, with a median SEM of 10 wcpm (Christ & Silberglitt, 2007). Even when passage effects are controlled through a G theory approach, the SEM can range from three or four to as many as 12 wcpm (Poncy et al., 2005). Furthermore, a line of research by Christ and colleagues has demonstrated a profound impact of SEM on estimates of slope and, ultimately, on decisions about adequate student progress (Ardoin & Christ, 2009; Christ, 2006; Christ, Zopluoglu, Long, & Monaghen, 2012). The work of Christ and colleagues reveals that when the SEM is small (i.e., < 6), slope estimates show little error as long as ORF is measured six or more times. However, when the SEM is larger, variation in slope estimates can also be quite large, with unsatisfactory psychometric properties until at least 10 or more ORF measures have accumulated. It is worth noting both (a) that small SEMs (< 6) are rarely observed in practice and (b) that waiting for a satisfactory number of data points to address unreliability in a student data set has the potential to either delay or misalign intervention support with student need. Ensuring that R-CBM is validated for the intended use of important educational decisions requires a renewed focus on the methods used to evaluate test administration.
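In classical test theory the SEM is tied directly to reliability: SEM = SD × sqrt(1 − r). The sketch below (Python, with plausible but invented values) shows how an SEM of roughly 10 wcpm translates into a confidence band around an observed score, which is why single-occasion scores and short progress-monitoring series warrant cautious interpretation.

```python
# Classical SEM and a 95% confidence band around an observed ORF score.
# SD and reliability are illustrative values, not drawn from a specific study.
import math

sd_wcpm = 35.0       # assumed between-student SD of WCPM at a given grade and season
reliability = 0.92   # assumed reliability of the score being interpreted

sem = sd_wcpm * math.sqrt(1 - reliability)   # about 9.9 wcpm, in line with reported SEMs
observed = 72                                # a student's observed WCPM

lower = observed - 1.96 * sem
upper = observed + 1.96 * sem
print(f"SEM = {sem:.1f} wcpm; 95% band: {lower:.0f} to {upper:.0f} wcpm")
```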

Linearity of R-CBM ORF Growth

Adding to the complexity of measurement modeling is the problem that change in ORF over time may not follow a linear trajectory, nor may it be the same for readers of all abilities, although most technical manuals and research, including the aforementioned work, assume both (see also Speece & Ritchey, 2005). Research over the past decade suggests that ORF growth may in fact be curvilinear, with faster growth observed earlier in the academic year and slower growth later (e.g., Al Otaiba et al., 2009; Christ, Silberglitt, Yeo, & Cormier, 2010; Crowe, Connor, & Petscher, 2009; Deno, Fuchs, Marston, & Shin, 2001; Logan & Petscher, 2010). In addition, ORF slopes vary depending on students' skill level at the beginning of the school year, receipt of special education services, the type and quality of instructional support available across the school year, and a number of other individual characteristics (Puranik, Petscher, Al Otaiba, Catts, & Lonigan, 2008; Silberglitt & Hintze, 2007; Wang, Algozzine, Ma, & Porfeli, 2011). This line of research implies that applying the same standard for expected growth, and a linear standard at that, may be a flawed practice. Considering that research has demonstrated that growth in ORF uniquely explains individual differences in reading comprehension (Kim, Petscher, Schatschneider, & Foorman, 2010), it is critically important that the functional form of growth be correctly specified.
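To illustrate what a curvilinear trajectory implies for modeling, the sketch below (Python with NumPy; the weekly WCPM values are fabricated to show fast early growth that flattens later) fits both a linear and a quadratic trend to one student's progress-monitoring series and compares fit. Analyses in the studies cited above use multilevel growth models across many students rather than per-student polynomial fits; this is only a minimal demonstration of why functional form matters.

```python
# Compare a linear and a quadratic fit to a single (hypothetical) student's
# within-year ORF series: fast growth early in the year that slows later.
import numpy as np

weeks = np.arange(0, 30, 3)                                # assessment occasions
wcpm = np.array([40, 52, 62, 70, 76, 81, 85, 88, 90, 91])  # fabricated scores

for degree, label in ((1, "linear"), (2, "quadratic")):
    coefs = np.polyfit(weeks, wcpm, degree)
    pred = np.polyval(coefs, weeks)
    rmse = np.sqrt(np.mean((wcpm - pred) ** 2))
    print(f"{label:9s} fit: RMSE = {rmse:.1f} wcpm")
# The quadratic term captures the deceleration; forcing a straight line
# overstates expected gains late in the year and understates them early on.
```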

Purpose of the Special Issue

Deno, Mirkin, and Chiang (1982) identified R-CBM as the critical CBM task and a robust indicator of overall reading competence, and decades of research since have demonstrated R-CBM's value for providing reliable and valid screening and monitoring information for first grade and beyond (e.g., Wayman, Wallace, Wiley, Tichá, & Espin, 2007). However, research on R-CBM for new purposes is not yet complete. The purpose of this special issue is to bring together a set of articles that highlight at least some of the novel measurement issues facing R-CBM researchers in the 21st century. Issues of form equivalency, methodologies for estimating linear and non-linear change over time, and the use of R-CBM tools as indicators of teacher effects are the three key areas on which we focus.

In the first article, Stoolmiller, Biancarosa, and Fien explore passage equivalence from a robust and novel measurement perspective. The authors apply a set of linear and non-linear factor analytic models to scores from oral reading fluency passages and discuss the appropriate methods for equating given the type of measurement model applied. Cummings, Park, and Bauer follow this paper by examining the persistence of form effects in the DIBELS Next Oral Reading Fluency measures in first through sixth grades, evaluating the value of different equating strategies for controlling differences in average fluency rates across the passages. The third article in this series, by Christ, Monaghen, Zopluoglu, and Van Norman, evaluates the growth and diagnostic accuracy of R-CBM scores using pre-post assessment data ranging from six-week to 20-week intervals. The fourth article, by Kamata, Nese, Patarapichayathan, and Lai, uses scores from passage reading fluency to demonstrate how piecewise growth models and piecewise growth mixture models may be applied to data with only three time points per individual. The fifth article, by Hart, Taylor, and Schatschneider, presents a unique methodology using ORF scores to demonstrate how classrooms may causally affect later performance in fluency tasks. Though this design has traditionally been used in medical research, this paper represents its first application in education to account for school effects above the student level.

Conclusion

The articles represented here begin to answer recent calls to explore more deeply and comprehensively the reliability and validity of fluency scores. Because R-CBM, especially oral reading fluency, will continue to be used by practitioners and researchers, it behooves us to continually study methodologies and practices for improving the psychometric quality of the scores from such assessments. Advances in educational measurement have been occurring at a rapid pace, yet even some relatively older innovations developed in the last ten to fifteen years [e.g., item response models with time parameters (Verhelst, Verstralen, & Jansen, 1997), latent change score models (McArdle, 2001)], which hold great promise for understanding individual differences, have seen little application in the field of assessment research. As accessible software packages and programs are developed for educational researchers, the feasibility of implementing such models is greatly enhanced. The potential promise of these advances in measurement is the ability to estimate more reliable student scores and to remove multiple sources of bias. In turn, this augments the capacity to use R-CBM scores to correctly identify students' strengths and weaknesses and the need for additional services or accommodations, as well as to monitor progress.

Footnotes

1. In a discussion among the authors about whether to represent curriculum-based measures in reading as CBM-R (the more historic representation) or R-CBM (a more modern characterization), we opted to use the RAND function in Excel to decide which acronym would be used. The lot fell to R-CBM; however, the acronyms can be used interchangeably.

References

1. Al Otaiba S, Petscher Y, Pappamihiel NE, Williams RS, Drylund AK, Connor CM. Modeling oral reading fluency development in Latino students: A longitudinal study across second and third grade. Journal of Educational Psychology. 2009;101:315–329. doi: 10.1037/a0014698.
2. Albano AD, Rodriguez MC. Statistical equating with measures of oral reading fluency. Journal of School Psychology. 2012;50:43–59. doi: 10.1016/j.jsp.2011.07.002.
3. Ardoin SP, Christ TJ. Curriculum based measurement of oral reading: Estimates of standard error when monitoring progress using alternate passage sets. School Psychology Review. 2009;38:266–283.
4. Ardoin SP, Suldo SM, Witt J, Aldrich S, McDonald E. Accuracy of readability estimates' predictions of CBM performance. School Psychology Quarterly. 2005;20:1–22.
5. Ardoin SP, Williams JC, Christ TJ, Klubnik C, Wellborn C. Examining readability estimates' predictions of students' oral reading rate: Spache, Lexile, and Forcast. School Psychology Review. 2010;39:277–285.
6. Baker S, Smolkowski K, Katz R, Fien H, Seeley J, Kame'enui EJ, et al. Reading fluency as a predictor of reading proficiency in low-performing high poverty schools. School Psychology Review. 2008;37:18–37.
7. Barth A, Stuebing KK, Fletcher JM, Cirino PT, Francis DJ, Vaughn S. Reliability and validity of the median score when assessing the oral reading fluency of middle grade readers. Reading Psychology. 2012;33:131–161. doi: 10.1080/02702711.2012.631863.
8. Betts J, Pickart M, Heistad D. An investigation of the psychometric evidence of CBM-R passage equivalence: Utility of readability statistics and equating alternate forms. Journal of School Psychology. 2009;47:1–17.
9. Biancarosa G, Bryk AS, Dexter E. Assessing the value-added effects of Literacy Collaborative professional development on student learning. Elementary School Journal. 2010;111:7–34.
10. Buck J, Torgesen J. The relationship between performance on a measure of oral reading fluency and performance on the Florida Comprehensive Assessment Test [Electronic version]. 2003. Retrieved from www.fcrr.org/TechnicalReports/TechnicalReport1.pdf
11. Christ TJ. Short term estimates of growth using curriculum-based measurement of oral reading fluency: Estimates of standard error of the slope to construct confidence intervals. School Psychology Review. 2006;35:128–133.
12. Christ TJ, Ardoin SP. Curriculum-based measurement of oral reading: Passage equivalence and probe-set development. Journal of School Psychology. 2009;47:55–75.
13. Christ TJ, Silberglitt B. Curriculum-based measurement of oral reading fluency: The standard error of measurement. School Psychology Review. 2007;36:130–146.
14. Christ TJ, Silberglitt B, Yeo S, Cormier D. Curriculum-based measurement of oral reading: An evaluation of growth rates and seasonal effects among students served in general and special education. School Psychology Review. 2010;39:447–462.
15. Christ TJ, Zopluoglu C, Long JD, Monaghen BD. Curriculum-based measurement of oral reading: Quality of progress monitoring outcomes. Exceptional Children. 2012;78:356–373.
16. Crowe EC, Connor CM, Petscher Y. Examining the core relations between poverty, reading curriculums, and first through third grade reading achievement. Journal of School Psychology. 2009;47:187–214. doi: 10.1016/j.jsp.2009.02.002.
17. Cummings KD, Atkins T, Allison R, Cole C. Response to Intervention. Teaching Exceptional Children. 2008;40:24–31.
18. Deno SL. Developments in curriculum-based measurement. Journal of Special Education. 2003;37:184–192.
19. Deno SL, Fuchs LS, Marston D, Shin J. Using curriculum-based measurement to establish growth standards for students with learning disabilities. School Psychology Review. 2001;30:507–526.
20. Deno SL, Mirkin PK, Chiang B. Identifying valid measures of reading. Exceptional Children. 1982;49:36–45.
21. Denton CA, Barth AE, Fletcher JM, Wexler J, Vaughn S, Cirino PT, Romain M, Francis DJ. The relations among oral and silent reading fluency and comprehension in middle school: Implications for identification and instruction of students with reading difficulties. Scientific Studies of Reading. 2011;15:109–135. doi: 10.1080/10888431003623546.
22. Francis DJ, Santi KS, Barr C, Fletcher JM, Varisco A, Foorman BR. Form effects on the estimation of students' oral reading fluency using DIBELS. Journal of School Psychology. 2008;46:315–342. doi: 10.1016/j.jsp.2007.06.003.
23. Fuchs LS. The past, present, and future of curriculum-based measurement research. School Psychology Review. 2004;33:188–192.
24. Fuchs LS, Fuchs D, Hosp MK, Jenkins JR. Oral reading fluency as an indicator of reading competence: A theoretical, empirical, and historical analysis. Scientific Studies of Reading. 2001;5:239–256.
25. Griffiths AJ, VanDerHeyden AM, Skokut M, Lilles E. Progress monitoring in oral reading fluency within the context of RTI. School Psychology Quarterly. 2009;24:13–23.
26. Hintze JM, Christ TJ. An examination of variability as a function of passage variance in CBM progress monitoring. School Psychology Review. 2004;33:204–217.
27. Hintze JM, Owen SV, Shapiro ES, Daly EJ. Generalizability of oral reading fluency measures: Application of G theory to curriculum-based measurement. School Psychology Quarterly. 2000;15:52–68.
28. Hintze JM, Shapiro ES, Conte KL, Basile IM. Oral reading fluency and authentic reading material: Criterion validity of the technical features of CBM survey-level assessment. School Psychology Review. 1997;26:535–553.
29. Howe KB, Shinn MM. Standard reading assessment passages (RAPs) for use in general outcome measurement: A manual for describing development and technical features. Eden Prairie, MN: Edformation; 2002.
30. Hosp M, Hosp J, Howell K. The ABCs of CBM: A practical guide to curriculum-based measurement. New York: The Guilford Press; 2007.
31. Hudson RF, Pullen PC, Lane HB, Torgesen JK. The complex nature of reading fluency: A multidimensional view. Reading and Writing Quarterly. 2009;25:4–32.
32. Kim YS, Petscher Y, Schatschneider C, Foorman BR. Does growth in oral reading fluency matter in reading comprehension achievement? Journal of Educational Psychology. 2010;102:652–667.
33. LaBerge D, Samuels SJ. Toward a theory of automatic information processing in reading. Cognitive Psychology. 1974;6:293–323.
34. Logan JAR, Petscher Y. School profiles of at-risk student concentration: Differential growth in oral reading fluency. Journal of School Psychology. 2010;48:163–186. doi: 10.1016/j.jsp.2009.12.002.
35. Marston DB. A curriculum-based measurement approach to assessing academic performance: What is it and why do it? In: Shinn MR, editor. Curriculum-based measurement: Assessing special children. New York: Guilford Press; 1989. pp. 18–78.
36. McArdle JJ. A latent difference score approach to longitudinal dynamic structural analyses. In: Cudeck R, du Toit S, Sorbom D, editors. Structural Equation Modeling: Present and Future. Lincolnwood, IL: Scientific Software International; 2001. pp. 342–380.
37. Petscher Y, Kim YS. The utility and accuracy of oral reading fluency score types in predicting reading comprehension. Journal of School Psychology. 2011;49:107–129. doi: 10.1016/j.jsp.2010.09.004.
38. Poncy BC, Skinner CH, Axtell PK. An investigation of the reliability and standard error of measurement of words read correctly per minute using curriculum-based measurement. Journal of Psychoeducational Assessment. 2005;23:326–338.
39. Puranik C, Petscher Y, Al Otaiba S, Catts HW, Lonigan CJ. Development of oral reading fluency in children with speech or language impairments: A growth curve analysis. Journal of Learning Disabilities. 2008;41:545–560. doi: 10.1177/0022219408317858.
40. Riedel BW. The relation between DIBELS, reading comprehension, and vocabulary in urban first-grade students. Reading Research Quarterly. 2007;42:546–567.
41. Roberts G, Good R, Corcoran S. Story retell: A fluency-based indicator of reading comprehension. School Psychology Quarterly. 2005;20:304–317.
42. Roehrig AD, Petscher Y, Nettles SM, Hudson RF, Torgesen JK. Not just speed reading: Accuracy of the DIBELS oral reading fluency measure for predicting high-stakes third grade reading comprehension outcomes. Journal of School Psychology. 2008;46:343–366. doi: 10.1016/j.jsp.2007.06.006.
43. Shapiro E, Solari E, Petscher Y. Use of an assessment of reading comprehension in addition to oral reading fluency on the state high stakes assessment for students in grades 3 through 5. Learning and Individual Differences. 2008;18:316–328. doi: 10.1016/j.lindif.2008.03.002.
44. Shinn MR, Gleason MM, Tindal G. Varying the difficulty of testing materials: Implications for curriculum-based measurement. Journal of Special Education. 1989;23:223–233.
45. Silberglitt B, Hintze J. How much growth can we expect? A conditional analysis of R-CBM growth rates by level of performance. Exceptional Children. 2007;74:71–84.
46. Speece DL, Ritchey KD. A longitudinal study of the development of oral reading fluency in young children at risk for reading failure. Journal of Learning Disabilities. 2005;38:387–399. doi: 10.1177/00222194050380050201.
47. Stecker PM, Fuchs LS, Fuchs D. Using curriculum-based measurement to improve student achievement: Review of research. Psychology in the Schools. 2005;42:795–819.
48. Verhelst ND, Verstralen HHFM, Jansen MGH. A logistic model for time-limit tests. In: van der Linden WJ, Hambleton RK, editors. Handbook of modern item response theory. New York: Springer; 1997. pp. 169–185.
49. Wang C, Algozzine B, Ma W, Porfeli E. Oral reading rates of second-grade students. Journal of Educational Psychology. 2011;103:442–454.
50. Wayman M, Wallace T, Wiley HI, Tichá R, Espin CA. Literature synthesis on curriculum-based measurement in reading. Journal of Special Education. 2007;41:85–120.
