Author manuscript; available in PMC: 2021 Sep 1.
Published in final edited form as: Read Writ. 2020 Apr 15;33(7):1809–1838. doi: 10.1007/s11145-020-10047-z

Factors that Influence Reading Acquisition in L2 English for Students in Bangalore, India

Sunaina Shenoy 1, Richard K Wagner 2, Nisha M Rao 3
PMCID: PMC7461702  NIHMSID: NIHMS1590159  PMID: 32884180

Abstract

This study explores the possibility of adapting specific progress-monitoring tools developed in the US for use in English-medium private schools in Bangalore. In the US, many teachers adopt progress-monitoring tools like the Dynamic Indicators of Basic Early Literacy Skills (DIBELS) and Curriculum Based Measurement (easyCBM) to keep track of their students’ reading abilities. We report on Phase 1 of a longitudinal study that included three phases of data collection. Participants included 1003 students in Grades 1, 3 and 5, and 50 teachers. Both quantitative and qualitative data were collected. Results indicated that students in low-cost schools struggled on all reading measures throughout elementary school; students in middle-cost schools had below average to average scores on reading measures; and students from high-cost schools had average to above average scores on all measures. Moreover, factors like oral language proficiency in English, socio-economic status, school and curriculum increased in their significance in predicting reading as students progressed through elementary grades. Teacher data suggested that the reading goals and instructional strategies varied considerably across schools. Implications for reading instruction and practice within the Indian context are discussed.

Keywords: reading assessment, progress-monitoring, socio-economic status, curriculum, Indian context


Around 80% of Indian schools are government schools, but 27% of Indian children are privately educated; in urban centers, more than 50% of children (27 million) attend private schools (Annual Status of Education Report, India, 2016). The private schools typically follow an international, national or state-level standardized curriculum, and the medium of instruction in these schools is usually English (Kurrien, 2005). In contrast, government schools typically follow a state-level curriculum and the medium of instruction is usually in the state language. English is an integral part of the education system in India because it is one of the two official languages of the country, along with Hindi (National Council of Educational Research and Training, 2011). Given the linguistic diversity of the country and numerous languages that are spoken in different states, English is also used as the national mode of communication and the unifying link language (National Council of Educational Research and Training, 2011). It is the language of economics and business, and is viewed as a requirement for economic and social mobility (Ramanathan & Bruning, 2003). According to Ramanathan and Atkinson (1999), a key assumption has been that the countries with native speakers of the language (e.g. Britain, US, Canada and Australia) set English standards for countries where English is used non-natively but extensively and has been given official language status (e.g. India, and parts of Africa). Unfortunately, “English and the privileges associated with it remain inaccessible to those who are from a lower SES in India, with the Indian middle-class assuming a position of power through its access to English” (p.212).

Schools in the country typically follow a three-language formula (Aggarwal, 1991) that is endorsed by the National Curriculum Framework 2005 (Ramachandran et al., 2005). The first language is the medium of instruction; the second is required to be taught at least by Grade 5 and the third by Grade 7 (Ramachandran et al., 2005). Hindi and English have to be introduced as two of these three languages (Saini, 2000), and the third one is typically the state language. However, the arbitrary time frames for when these languages are introduced in school make it difficult to measure the language acquisition process and proficiency in each language. Moreover, a child’s home language in most urban centers may or may not be one of the national or state languages taught in school; it could be any of the 22 major languages or 700 dialects used in India (Census of India, 2001). A typical child in Bangalore is exposed to at least four languages from ages 0–13 years: a home language (L1 or first language); school language 1 (L2 or second language), which is the language of instruction (English, in our sample); school language 2 (L3 or third language), which is the national language, Hindi; and school language 3 (L4 or fourth language), which is the state language (Kannada, in our sample).

In a typical seven-hour school day, a student is exposed to six hours of instruction in English and one hour of instruction in Hindi and/or the state language. Because exposure to English is greater than exposure to the other languages, and is pervasive across academic content areas, a greater percentage of students are proficient in speaking, listening, reading and writing in English by the end of high school and consider it their dominant language as adults.

Assessing reading and writing skills in classrooms and assessing proficiency in all these languages is challenging because of varying levels of exposure and rates of acquisition. Although we are cognizant of the multilingual backgrounds of students in our sample, we were interested in assessing second language (L2) English reading skills because it was the language of instruction for our population of students in Bangalore.

The amount of exposure to English teaching in India is dependent on the teacher’s English language proficiency and the students’ exposure to English outside of school (Nag-Arulmani, 2000). Kurrien (2005) identified four types of schools:

  1. English-medium middle-high cost private schools, where teachers are proficient in English, but students have varying levels of exposure to English in their environment, including as a home or first language;

  2. English-medium low-cost private schools, where teachers have limited proficiency in English and students have limited exposure to English in their environment, but parents aspire towards upward mobility via English;

  3. Government-aided regional schools, where teachers combine limited English proficiency with their knowledge of other regional languages, and students come from a variety of backgrounds;

  4. Government regional schools run by district and municipal education authorities, where teachers are the least proficient in English, and students have the least exposure to English in their environments.

There can be wide variations within these school types in terms of learning opportunities, class libraries, culture and management, resulting in varying levels of oral-language and reading proficiency among students (Nag-Arulmani, 2003).

Literacy Acquisition in Indian Schools

According to Gupta (2014), the main approach to reading in India is the Alphabet-Spelling Method, in which children are taught how to identify letters and then spell out words. This approach focuses on sight word recognition and not on letter-sound correspondences. Literacy in English in the Indian education context involves learning the letter names and focuses on visible products, such as copying letters and recitation.

More recently, Dixon et al. (2011) and Gupta (2014) have attempted to introduce Phonics Instruction in Indian schools as a move away from the Alphabet-Spelling Approach. Dixon et al. (2011) found that phonics instruction was a more effective instructional method for students from the slums, who had illiterate parents and no English language support at home. They observed significant differences on all reading and spelling outcome measures when a synthetic phonics program was compared to traditional instructional practices. Gupta (2014) reported similar findings for students from rural schools in India.

Although English holds a high position in English-medium private schools, Ramanathan and Bruning (2003) found that the constructs of listening, speaking, reading and writing were not valued equally in either teaching or testing contexts at any grade level. This remains the case in the present context, with students getting more opportunities for listening and written expression in a classroom setting than for speaking and reading. Moreover, the testing of English has focused on the skills of reading and writing to the exclusion of listening and speaking. These skills (speaking and listening) need to be taught and tested in addition to reading and writing (Central Board of Secondary Education, 2006), which requires a change in the examination process (Ramachandran et al., 2005).

Furthermore, at the school level, written assessment is the preferred mode of testing, with test creators aware that these assessments indirectly test students’ reading skills as well. The written tests are frequently scheduled and focus on content-area skills and specific grammar and writing skills, such as short answers to questions and essay-writing. They are designed and graded by teachers, with the grades weighted in decision-making regarding promotion to the next grade (Ramanathan & Bruning, 2003). The Kindergarten-Grade 8 syllabi are guided by the skills listed as objectives for Grade 10, and their exams are based on the format used in the Board exams for Grade 10 (Ramanathan, 2008). Board exams are conducted by the board of education that an English-medium private school follows; in our sample, the low-cost schools followed the State Board exams, while the middle-cost and high-cost schools followed the National Board exams. Students are expected to write short answers that vary in length from 50–150 words, and the focus is on rote memorization of responses (Ramanathan, 2008). Only questions discussed in class, and to which teachers have provided adequate responses, are tested. Students are not explicitly tested on oral language or reading-related skills from grades K-8.

According to Piller & Skillings (2005), teachers in a low-middle cost English-medium private school in New Delhi used nine main strategies to teach English in grades K-5:

  1. Demonstration: where teachers use real objects, perform actions, and use gestures and facial expressions to present words, sentence patterns and nursery rhymes;

  2. Choral drill: where students all chant together, following along as the teacher leads, to learn the alphabet, sentence patterns, vocabulary lists and nursery rhymes;

  3. Look and say: where students listen to the teacher, look at an object or print, and repeat a word or sentence after the teacher, used for reading the textbook or words on the blackboard;

  4. Pictorial illustration: use of blackboard drawings, sketches, photographs, maps and textbook illustrations to teach vocabulary words, and reading comprehension;

  5. Verbal illustration: giving a phrase or sentence that shows the typical use of the word in context, and linking new knowledge to existing knowledge;

  6. Association: for presenting vocabulary items like synonyms, antonyms and simple definitions;

  7. Questioning: to lead students to discover patterns, put items into categories, and find labels for categories;

  8. Narration: story-telling for reading comprehension;

  9. Read and say: where students read a paragraph on the blackboard and respond to a set of written questions.

Though this study was conducted 14 years ago, teachers in our study reported that they most commonly used six of the nine methods mentioned above in their classrooms: demonstration, choral drill, look and say, verbal illustration, association, and read and say. They did not, however, use pictorial illustration, questioning and narration, which could be interpreted as strategies aimed at higher-level critical-thinking skills.

Progress-Monitoring Tools

DIBELSNext

In the US, general outcome measures are used in schools as a basis for measuring early literacy skills, and the Dynamic Indicators of Basic Early Literacy Skills-Next Edition (DIBELSNext; Good, Kaminski & Cummings, 2011) is the most popular screening tool used in this regard. It has been validated for use with students from Grades K-6, and can be used to identify students at risk for reading difficulties, help teachers target instructional support, and examine the effectiveness of a school’s system of supports for reading acquisition. DIBELS is appropriate for students who are English Language Learners (Haager & Windmueller, 2001), as it is designed to measure progress in children acquiring literacy skills for reading in English. It is, however, not appropriate for students who are learning to read in a language other than English; such students would benefit from being assessed in the languages in which they are being instructed.

It includes subtests to measure pre-literate skills in kindergarten and Grade 1, such as phonemic awareness and letter-sound knowledge (i.e. First Sound Fluency, Phoneme Segmentation Fluency, and Nonsense Word Fluency). DIBELSNext also includes subtests that measure reading fluency and reading comprehension, including Oral Reading Fluency (ORF) and Retell Fluency for students in Grades 1–6 (aged 6–11 years), and the Daze comprehension subtest for students in Grades 3–6 (aged 8–11 years) (Munger et al., 2014). Of particular interest are scores on the Oral Reading Fluency (ORF) subtest; as noted by Goffreda & DiPerna (2010), these scores are significantly correlated with reading comprehension measures. It is to be noted, however, that their study was conducted with monolingual learners of English, and the authors state that “additional studies are necessary to address the technical adequacy of DIBELS scores for students from specific racial and ethnic groups, as well as English Language Learners” (Goffreda & DiPerna, 2010, p. 480). The current study offers some insight into the correlation of reading subtests within an Indian context, where students are enrolled in English-medium schools even if English is not their home language (L1).

Easy Curriculum-Based Measures (easyCBM)

The easyCBM measures were developed and revised by researchers at the University of Oregon (Anderson et al., 2014). The goal has been to support “data-driven instructional decision making through enhanced reporting options” (Anderson et al., 2014, p. 4). They are curriculum-based measures that assess students’ mastery of skills for their grade level. They were originally designed for universal screening and progress monitoring for all students in the classroom, and to record incremental changes in performance through the school year (Deno, 2003; Keller-Margulis et al., 2008). Benchmark assessments are typically administered to all students in fall, winter and spring. These measures include subtests of decoding, fluency and comprehension as students progress through elementary school (average ages 6–10 years). As with the DIBELS, the most researched subtest has been the Passage Reading Fluency subtest, as it is most often used to monitor student progress within the Response to Intervention (RTI) framework (Nese et al., 2013).

Progress-Monitoring Tools in an Indian Context

For the current study, we utilized progress-monitoring tools to measure reading development in L2 English for the following reasons: (a) although students in the sample were assessed in written content-area skills, they were not being assessed in reading-related skills; (b) while they were bilingual or multilingual in English and other native languages, they were not bi-literate or multi-literate and did not have academic skills in their native language; (c) English served as a link language in the classroom as students came from different home language backgrounds. We were aware that students in Grade 1 in Bangalore, India did not follow a phonics-based curriculum and/or reading programs commonly used in the US, but were predominantly instructed in varying versions of the Alphabet-Spelling approach (Gupta, 2014) instead. We expected to see wide variations in reading subtest scores across low-, middle- and high-cost schools, but in general, we expected to see lower scores in decoding abilities than in fluency and comprehension, as students were not directly instructed in the former.

For Grade 1 (average age 6 years), we expected to see below average to average scores on measures of decoding ability such as letter sounds, nonsense word fluency and phoneme segmenting, as students in 5 of the 6 schools in our sample received no direct instruction in these skills. Moreover, we were interested in other reading subtests, such as letter names, oral reading fluency, retell fluency, word reading fluency and passage reading fluency, which were central to the reading process regardless of the type of reading instruction used in these classrooms. For Grades 3 and 5 (average ages 8 and 10 years), we expected students to perform in the average range on measures of reading fluency and comprehension, and were especially interested in observing how these skills developed despite the lack of direct phonics-based instruction. We utilized both DIBELSNext and easyCBM to capture the array of subtests measuring the varied reading sub-skills common to students in Grades 1 (average age 6 years), 3 (average age 8 years) and 5 (average age 10 years). Both tools were used in the current study to compare the correlations between reading subtests in an Indian context, and to record their efficacy as measures of L2 English acquisition within that context.

The content of most subtests was retained, and they were administered exactly as they are administered in the US. For the Oral Reading Fluency subtest from DIBELSNext and the Passage Reading Fluency subtest from easyCBM, culture-free passages were chosen that discussed generic themes such as taking care of a pet dog, trees and plants, and going to the market. For example, a passage titled “Parts of a Tree” was chosen instead of “The Cocoa Stand”, as the latter was not relevant to the Indian context. The passages were modified to reflect names that are common within the Indian context (e.g. “Abby” was replaced with “Asha”) and some words were changed to reflect common usage in the culture (e.g. “jump rope” was replaced with “skipping rope”), but the essence of the passages in terms of comprehension was not changed.

Context of Present Study and Research Questions

The setting for this study was low-, middle- and high-cost private schools in Bangalore, India, all of which used English as the medium of instruction. The schools in our sample reflected Kurrien’s (2005) description of (a) English-medium middle-high cost private schools, where teachers are proficient in English, but students have varying levels of exposure to English in their environment, including as a home or first language; and (b) English-medium low-cost private schools, where teachers have limited proficiency in English and students have limited exposure to English in their environment, but parents aspire towards upward mobility via English. However, we divided the schools into middle- and high-cost because there was a significant difference between the socioeconomic backgrounds of the students attending these schools. Since students in our sample came from different home language backgrounds, English soon took on the role of a link language both within the classroom and outside of it. Written assessments were predominantly utilized to measure students’ knowledge of content-area skills.

The focus of this study however was to measure English reading-related skills. There were three research questions:

  1. How do students in Grades 1, 3 and 5 perform on reading subtests across low-cost, middle-cost and high-cost schools, compared to US norms?

  2. Do factors such as gender, SES, school and type of curriculum predict reading scores?

  3. What other instructional factors as indicated by teachers predict reading scores?

Method

Participants

Students

The sample consisted of 1003 students from Grades 1, 3 and 5. Students came from different home language backgrounds and were enrolled in English-medium schools even though English was not their home language (L1). They did not receive any additional bilingual support for the development of their home languages and were not expected to be bi-literate in both languages. The demographic information of the students is presented in Table 1. There were 12 home languages represented in our sample of students. The home language (L1) information of the students is presented in Table 2.

Table 1:

Demographic Data for the Students in the Sample

                                          Grade 1 (N=346)    Grade 3 (N=328)    Grade 5 (N=329)
                                          Freq.    %         Freq.    %         Freq.    %
Individual Characteristics
  Gender       Male                       171      49.42     179      54.57     189      57.45
               Female                     175      50.58     149      45.43     140      42.55
  SES          Low-Income                  46      13.29      40      12.20      45      13.68
               Middle-Income              175      50.58     220      67.07     210      63.83
               High-Income                125      36.13      68      20.73      74      22.49
School Characteristics
  School Type  Low-Cost 1                  37      10.69      36      10.98      37      11.25
               Low-Cost 2                   9       2.60       4       1.22       8       2.43
               Middle-Cost 1               74      21.39      84      25.61      83      25.23
               Middle-Cost 2              101      29.19     136      41.46     127      38.60
               High-Cost 1                107      30.92      51      15.55      71      21.58
               High-Cost 2                 18       5.20      17       5.18       3       0.91
  Curriculum   State                       46      13.29      40      12.20      45      13.68
               National                   282      81.50     271      82.62     281      85.41
               Montessori                  18       5.20      17       5.18       3       0.91
Table 2:

Home Language (L1) Data for Students in the Sample

Names of Languages    Grade 1 (N=346)    Grade 3 (N=328)    Grade 5 (N=329)
                      Freq.    %         Freq.    %         Freq.    %
Kannada                75      21.68     102      31.09      84      25.53
Hindi                  55      15.89      62      18.90      70      21.27
Telugu                 48      13.87      45      13.72      67      20.36
Bengali                36      10.40      26       7.91       0       0
Tamil                  32       9.25      22       6.71      45      13.67
Urdu                   29       8.38      35      10.67      15       4.56
Malayalam              25       7.23      17       5.18       0       0
Gujarati               25       7.23       0       0          2       0.60
Marathi                21       6.07       0       0         21       6.38
Kodava                  0       0          3       0.91      10       3.04
Konkani                 0       0          8       2.45       0       0
Tulu                    0       0         10       3.05      15       4.55

Teachers

A total of 50 teachers completed the Teacher Questionnaires across all the six school sites. Class teachers of Grades 1, 3 and 5 were selected to participate in the study.

School Setting

Students were recruited from six school sites, all located in the urban center of Bangalore. Two of these schools were low-cost schools, two were middle-cost schools and two were high-cost schools. Since all the schools were private schools, the students that attended them typically came from low-income, middle-income and high-income socio-economic backgrounds respectively. Both low-cost schools followed a State Board Curriculum prescribed by the state of Karnataka; both middle-cost schools followed a National Board Curriculum prescribed by the Central Board of Education in India; one high-cost school followed the National Board Curriculum, and the other followed a Montessori Curriculum. The State Board Curriculum is less rigorous than the National Board Curriculum, and the main goal for students graduating from the State Board Curriculum is to find jobs within the state of Karnataka. In comparison, the National Board Curriculum is more rigorous, is followed throughout India, and prepares students for national and international jobs. The school characteristics are presented in Table 1.

Measures

We report on several measures that were administered in our study: DIBELSNext, easyCBM, TOSREC, Student Oral Language Observation Matrix (SOLOM) and Teacher Questionnaire. Whereas the reading measures were administered across three time periods during the 2017–18 academic year, the rating scale and teacher questionnaire were only administered once during the second phase of the data collection period.

Because the academic year in India starts in June and ends in March, the reading measures were administered in July-August, October-November and January-February to correspond with benchmark assessments that are administered in Fall, Winter and Spring in the US. For the purposes of this study, we report on Phase 1 data, collected in July-August 2017. The total individual administration time for all the measures was approximately 30 minutes for students in Grade 1 and 15 minutes for students in Grades 3 and 5. Students in Grades 3 and 5 spent an additional 30 minutes on the group-administered Multiple Choice Reading Comprehension measure from the easyCBM.

There is an overlap in the subtests administered from the DIBELS and easyCBM measures for the following reasons:

  1. We wanted to compare scores to an equivalent reading measure, and since teachers were not measuring students’ reading skills in their classrooms, we used two progress-monitoring tools developed in the US that use similar subtests, so as to document and compare reliability and validity of scores.

  2. Because we administered the tests as a research team while also training teachers in Indian classrooms to use them, we wanted to assess which set of subtests was easier for teachers to access, administer and score within their context.

DIBELSNext

For Grade 1, the following subtests were administered: Letter Naming Fluency, Phoneme Segmenting Fluency, Nonsense Word Fluency, Oral Reading Fluency and Retell Fluency. All subtests were timed tests and were administered for 1 minute each.

Letter naming fluency

Several research studies (Kaminski & Good, 1996; Scarborough, 1998; Stahl & Murray, 1994; Wagner, Torgesen & Rashotte, 1994) have established a strong relationship between knowledge of letter names and phonological awareness. Letter naming fluency has been predictive of later reading performance (Adams, 1990). This subtest measured the student’s ability to name as many letters as possible in 1 minute, and the student received a score of 1 point for each letter that was correctly identified.

Phoneme segmentation fluency

Phonemic awareness is the knowledge that words are made up of individual sounds or phonemes, and it is highly predictive of reading success (Gillon, 2005; Stahl & Murray, 1994). Phoneme segmentation fluency is a direct measure of phonemic awareness and assesses the student’s ability to break up a word into corresponding sound segments. For example, the word “fin” has three sound segments: /f/ /i/ /n/. The student received 1 point for each different, correct sound produced in 1 minute.

Nonsense word fluency

Knowledge of the alphabetic principle and basic phonics is essential for decoding (Adams, 1990; Ehri, 2002) and reading fluency (Share & Stanovich, 1995). This measure consisted of two parts: correct letter sounds (CLS) and whole words read (WWR). It focused on the student’s knowledge of letter-sound correspondences, and their ability to process CVC combinations that were non-words (e.g. /b/ /o/ /l/). Students received credit for 1 CLS for each correct letter sound read in isolation or read as part of a make-believe word. They also received 1 WWR for each whole word read correctly without first being sounded out.

Oral reading fluency

According to the National Reading Panel, US (2000), reading fluency is dependent on advanced word-attack skills, which refers to decoding skills or the ability to automatically recognize and analyze a printed word and connect it to the spoken word it represents. Oral reading fluency is the link between automatized word decoding and comprehension. Several research studies (Crowder & Wagner, 1992; LaBerge & Samuels, 1974; Perfetti, 1985; Wolf & Katzir-Cohen, 2001) have illuminated the strong relationship between reading fluency and comprehension in monolingual English learners in Western educational settings. For multilingual learners, the role of ORF in reading development is still inconclusive (Dowd & Bartlett, 2019; Quirk & Beem, 2012). The ORF subtest measured the student’s ability to accurately read an unknown passage in one minute, and the student received 1 point for each word that was correctly read.

Retell fluency

Following the ORF subtest, the student is given 1 minute to recall and retell the story that he/she just read as part of the ORF subtest. They received 1 point for every word in their retell that was related to the passage. This subtest provided a quick assessment of a student’s comprehension of text.

For grades 3 and 5, the following subtests were administered: Oral Reading Fluency, Retell Fluency and Daze Comprehension. The ORF passages were grade-appropriate and followed the same administration procedures as described above for Grade 1.

Daze comprehension

Reading comprehension is dependent on a high-level of decoding and fluency (Adams, 1990), access to the syntax and semantics of the language (Catts & Kamhi, 1999) and background knowledge to understand words in context (Duke, Pressley & Hilden, 2004). DIBELS Daze is a cloze comprehension measure, which measures the students’ understanding of the meaning of a word within the context of a sentence. It was individually administered and students were given 3 minutes to complete the test. They were asked to silently read a passage and circle their word choices. According to the authors, approximately every seventh word was replaced by a box containing the correct word and two distractor words. The scores represented the number of correct and incorrect words, and an adjusted score that compensates for guessing is calculated based on the number of correct and incorrect responses.
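
The guessing adjustment described above can be sketched as follows. The half-point penalty per incorrect response is the convention we understand DIBELSNext to use for Daze; the function name and example counts below are ours, for illustration only:

```python
def daze_adjusted_score(correct: int, incorrect: int) -> float:
    """Adjusted Daze score that compensates for guessing.

    Assumes the DIBELSNext convention of subtracting half a point
    per incorrect response; scores are floored at zero.
    """
    return max(correct - incorrect / 2, 0.0)

# Illustration: 20 correct and 4 incorrect responses
print(daze_adjusted_score(20, 4))  # prints 18.0
```

Because each item offers one correct word and two distractors, a student guessing at random would answer roughly one item in three correctly; the penalty discounts scores inflated in this way.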

A reliability analysis was carried out on all the subtests of the DIBELSNext measures for each grade level. For Grade 1, there were a total number of 5 subtests and the measure showed a high level of reliability, α= 0.77. For Grade 3, there were a total number of 3 subtests and the measure showed a moderate-high level of reliability, α= 0.70. For Grade 5, there were a total number of 3 subtests and the measure showed a moderate level of reliability, α= 0.68.
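
The α values reported here (and for easyCBM and the TOSREC below) are internal-consistency coefficients, i.e. Cronbach's alpha computed over students' subtest scores. A minimal sketch of the computation, using sample variances; the score matrix below is invented purely to illustrate the formula:

```python
from statistics import variance

def cronbach_alpha(scores: list[list[float]]) -> float:
    """Cronbach's alpha for a students-by-subtests score matrix.

    alpha = k/(k-1) * (1 - sum of subtest variances / variance of totals),
    where k is the number of subtests.
    """
    k = len(scores[0])                       # number of subtests
    subtests = list(zip(*scores))            # transpose: one tuple per subtest
    item_var = sum(variance(col) for col in subtests)
    total_var = variance([sum(row) for row in scores])
    return k / (k - 1) * (1 - item_var / total_var)

# Hypothetical scores for 4 students on 3 subtests (not study data)
data = [[10, 12, 11], [8, 9, 9], [14, 15, 13], [6, 7, 8]]
print(round(cronbach_alpha(data), 2))  # prints 0.98
```

Higher values indicate that the subtests rank students consistently; the grade-level differences above reflect the smaller number of subtests administered in the upper grades.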

easyCBM

For Grade 1, the following subtests were administered: Letter Names, Letter Sounds, Phoneme Segmenting, Word Reading Fluency and Passage Reading Fluency. All subtests were timed tests and were administered for 1 minute each.

Letter names

This subtest was similar to the Letter Naming Fluency subtest in the DIBELS. Students were given 1 minute to name as many letters as possible and received 1 point for every letter named correctly.

Letter sounds

Students were presented with letters of the alphabet either in uppercase or lowercase format and were asked to produce the letter sounds. It was a timed test for 1 minute and students received 1 point per letter sound that they correctly identified.

Phoneme segmenting

This subtest was similar to the Phoneme Segmenting Fluency subtest in the DIBELS. The student received 1 point for each different, correct sound produced in 1 minute.

Word reading fluency

Students were presented with a list of words and asked to read them. They received 1 point for every word correctly read in 1 minute.

Passage reading fluency

This subtest was similar to the ORF subtest on the DIBELS. The PRF subtest measured the student’s ability to accurately read an unknown passage in one minute, and the student received 1 point for each word that was correctly read.

For Grades 3 and 5, the following subtests were administered: Passage Reading Fluency and Multiple Choice Reading Comprehension. The passages for the Passage Reading Fluency subtest were appropriate for Grades 3 and 5, and followed the same administration procedures as for Grade 1.

Multiple choice reading comprehension

Students were instructed to silently read a comprehension passage and answer twenty multiple choice comprehension questions that followed. This subtest was group-administered by class sections in the schools, and typically took 30 minutes to complete. Scores were calculated as number of correct responses out of the twenty questions.

A reliability analysis was carried out on all the subtests of the easyCBM measures for each grade level. For Grade 1, there were five subtests and the measure showed a high level of reliability, α = 0.85. For Grade 3, there were three subtests and the measure showed a moderate-to-high level of reliability, α = 0.72. For Grade 5, there were two subtests and the measure showed a moderate level of reliability, α = 0.69.

Test of Silent Reading Efficiency and Comprehension (TOSREC)

We chose to use the TOSREC (Wagner, Torgesen & Rashotte, 2010) for two reasons: (a) the DIBELS and easyCBM progress-monitoring tools had a reading fluency measure but no comprehension measure for Grade 1 (average age 6 years), and we wanted to measure comprehension across all grades; and (b) it provided a different way of measuring comprehension for Grades 3 (average age 8 years) and 5 (average age 10 years), apart from the sentence level (the DIBELS Daze) and the passage level (Multiple Choice Reading Comprehension from the easyCBM). Students had to read multiple statements and determine whether they were true or false. For example, they read a statement such as “A bear can fly” and checked a box labeled “yes” or “no”. They were given 3 minutes to complete the test, and raw scores were calculated by subtracting incorrect responses from correct ones. Statements that used American English vocabulary were changed to Indian English so students would understand them (e.g. the statement “Cookies grow on a hill” was changed to “Biscuits grow on a hill”). For the most part, however, the TOSREC served as a culture-free test because it included individual generic statements rather than a passage or story that could contain cultural references, so it was a preferred test of fluency and comprehension for the Indian context.
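The scoring rule described above (correct responses minus incorrect responses, with unreached items counting as neither) can be sketched as follows; the function name and the use of None for items not reached in the 3-minute limit are our own illustrative choices, not part of the published test materials:

```python
def tosrec_raw_score(responses, answer_key):
    """Raw TOSREC score: correct responses minus incorrect responses.

    responses: the student's 'yes'/'no' marks, with None standing in
    for items not reached within the time limit (these count as
    neither correct nor incorrect).
    """
    attempted = [(r, k) for r, k in zip(responses, answer_key) if r is not None]
    correct = sum(1 for r, k in attempted if r == k)
    incorrect = len(attempted) - correct
    return correct - incorrect
```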

A reliability analysis was carried out on the TOSREC for each grade level. The measure was highly reliable across grades, with α = 0.85 for Grade 1, α = 0.88 for Grade 3 and α = 0.82 for Grade 5. Across the grades, most items appeared to be worthy of retention: deleting any of them would have decreased the alpha.

Student Oral Language Observation Matrix (SOLOM)

The SOLOM is an informal rating scale that was developed by Collier (2008) for teachers to rate Limited English Proficient (LEP) students in their classrooms. For our study, researchers in the field observed students in classrooms and during individual assessment, and rated them on five areas: Comprehension, Fluency, Vocabulary, Pronunciation and Grammar. These ratings were then converted into a composite score that classified students into five levels of English language proficiency: pre-production, early production, speech emergence, intermediate fluency and advanced fluency. A reliability analysis was carried out on the SOLOM for each grade level. The measure was moderately reliable across grades, with α = 0.73 for Grade 1, α = 0.68 for Grade 3 and α = 0.72 for Grade 5.

Teacher Questionnaire

The questionnaire included four open-ended questions that were targeted at understanding reading practices in schools: (1) What are the reading goals for your grade? (2) How do you teach students to read in Grades K-2? (3) How do you teach students to read in Grades 3–5? (4) By what grade are students in your school expected to read?

Results

Grade 1 Performance on Reading Subtests

Table 3 shows the summary statistics for all the subtests that were administered. Figure 1 displays the reading subtest scores for Grade 1 students in the sample compared to US norms1. This comparison was purely descriptive and of academic interest to the researchers. A one-sample t-test was conducted to determine whether a statistically significant difference existed between the reading subtest scores of students in Bangalore schools and US norms. Quantitative assumptions for the t-test analysis were met. Three subtests (Oral Reading Fluency, Retell Fluency and Passage Reading Fluency) had US norms only for winter and spring administrations, as indicated by the manual, so these tests were excluded from the graphs and from the t-test calculations. We only included the decoding and fluency measures that had US norms for fall administration, which corresponded with our first phase of data collection.
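A one-sample t-test of this kind compares each school group's mean against a fixed normative value. As a minimal sketch (not the authors' analysis code, whose software is not described), the t statistic and the Cohen's d effect sizes reported below can be computed as:

```python
from math import sqrt
from statistics import mean, stdev

def one_sample_t(scores, norm_mean):
    """t statistic and Cohen's d for a sample tested against a fixed
    normative mean (here, the US norm for a given subtest and season)."""
    n = len(scores)
    m, sd = mean(scores), stdev(scores)
    t = (m - norm_mean) / (sd / sqrt(n))
    d = (m - norm_mean) / sd  # standardized mean difference
    return t, d
```

Note that d does not depend on sample size, which is why the small Montessori group can show large effect sizes even when its t values are modest.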

Table 3.

Summary Statistics for Subtests on the DIBELSNext and easyCBM Measures

| Subtest | Grade 1 (N=346) Mean (SD) | Grade 1 Min–Max | Grade 3 (N=328) Mean (SD) | Grade 3 Min–Max | Grade 5 (N=329) Mean (SD) | Grade 5 Min–Max |
| --- | --- | --- | --- | --- | --- | --- |
| TOSREC | 4.89 (7.06) | 0–40 | 12.51 (7.48) | 0–48 | 11.52 (7.60) | 0–36 |
| DIBELS Next LNF | 47 (17.78) | 3–96 | -- | -- | -- | -- |
| PSF | 18.11 (16.13) | 0–80 | -- | -- | -- | -- |
| NWF-letters | 15.46 (16.64) | 0–143 | -- | -- | -- | -- |
| NWF-words | 7.27 (9.24) | 0–50 | -- | -- | -- | -- |
| ORF | 13.52 (20.48) | 0–192 | 58.80 (39.54) | 0–249 | 83 (40.55) | 0–206 |
| RTF | 1.89 (4.47) | 0–36 | 9.84 (14.59) | 0–94 | 17.83 (18.14) | 0–90 |
| DAZE | -- | -- | 3.35 (4.53) | 0–36 | 9.25 (8.20) | 0–51 |
| easyCBM LN | 48.17 (15.62) | 7–90 | -- | -- | -- | -- |
| LS | 20.64 (14.56) | 0–63 | -- | -- | -- | -- |
| PS | 14.72 (12) | 0–51 | -- | -- | -- | -- |
| WRF | 14.31 (15.53) | 0–118 | 36.18 (23.63) | 0–115 | -- | -- |
| PRF | 12.72 (23.35) | 0–185 | 66.05 (42.09) | 0–207 | 104.55 (44.51) | 0–241 |
| MCRC | -- | -- | 5.70 (3.26) | 0–18 | 10.64 (3.50) | 0–20 |

Fig 1: Comparison of mean scores on progress-monitoring tools between schools in Bangalore, India and US norms for Grade 1

Students from low-cost schools, following the State Board Curriculum, performed significantly below US norms on all subtest measures. Students from middle-cost schools, following the National Board Curriculum, performed significantly below US norms on all measures except word reading fluency, on which they were similar to US norms (M=12.75, SD=0.76, t(174)=0.58, p=0.56, d=0.15). Students from high-cost schools, following the National Board Curriculum, performed significantly lower than US norms on subtests of letter sounds (M=22.32, SD=15.25, t(106)=3.51, p<0.001, d=0.56), phoneme segmentation (M=14.39, SD=12.23, t(106)=15.72, p<0.001, d=0.27) and nonsense word fluency (M=16.64, SD=19.89, t(106)=8.91, p<0.001, d=0.23), but they performed very similarly to US norms on subtests of letter names (M=46.65, SD=16.08, t(106)=1.21, p=0.22, d=0.30) and word reading fluency (M=15.69, SD=15.84, t(106)=1.62, p=0.12, d=0.15). Students following the Montessori curriculum in the sample performed significantly above US norms on tests of letter sounds (M=37.94, SD=14.93, t(17)=2.96, p<0.001, d=1.3), word reading fluency (M=47, SD=27.85, t(17)=5.15, p<0.001, d=2.54) and letter names (M=61.38, SD=19.89, t(17)=3.54, p<0.001, d=0.90), and similarly to US norms on tests of phoneme segmentation (M=31, SD=12.52, t(17)=0.67, p=0.51, d=1.5) and nonsense word fluency (M=4.16, SD=3.41, t(17)=36.78, p=0.60, d=2.39). All schools in the sample performed significantly below US norms on the TOSREC, except students from the Montessori school, who performed on par with US norms (M=19.11, SD=11.29, t(17)=1.43, p=0.17, d=2.41). Taken together, these results suggest that when students following the national curriculum were compared to US norms, their performance produced small effect size values. In comparison, students following the Montessori curriculum produced large effect size values, far surpassing US norms.

The correlation matrix for Grade 1 subtests is presented in Table 4. Quantitative assumptions for the correlation analysis were met. The matrix covers all the tests administered, including Oral Reading Fluency (ORF), Passage Reading Fluency (PRF) and the TOSREC. High correlations were recorded between the TOSREC and the word reading fluency (WRF), passage reading fluency (PRF), nonsense word fluency (NWF) and oral reading fluency (ORF) subtests. High correlations were also recorded between the DIBELS and easyCBM versions of the same test (e.g. Letter Names and Letter Naming Fluency; Phoneme Segmentation and Phoneme Segmentation Fluency). Finally, the word reading fluency subtest correlated highly with the passage reading fluency (PRF), nonsense word fluency (NWF) and oral reading fluency (ORF) subtests.
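The entries in Tables 4–6 are Pearson correlations between pairs of subtest score lists. For reference, a minimal pure-Python version of the statistic, assuming complete paired scores (the authors' handling of missing data is not described):

```python
from math import sqrt
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation between two aligned lists of subtest scores."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / sqrt(var_x * var_y)
```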

Table 4.

Correlations between Reading Subtests for Grade 1

| | TOSREC | LN | LS | PS | WRF | PRF | LNF | PSF | NWF-letters | NWF-words | ORF | RTF |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| TOSREC | 1 | | | | | | | | | | | |
| LN | 0.52 | 1 | | | | | | | | | | |
| LS | 0.54 | 0.56 | 1 | | | | | | | | | |
| PS | 0.41 | 0.34 | 0.69 | 1 | | | | | | | | |
| WRF | 0.86* | 0.60 | 0.62 | 0.46 | 1 | | | | | | | |
| PRF | 0.85* | 0.50 | 0.52 | 0.38 | 0.91* | 1 | | | | | | |
| LNF | 0.52 | 0.82* | 0.53 | 0.38 | 0.62 | 0.50 | 1 | | | | | |
| PSF | 0.45 | 0.32 | 0.68 | 0.87* | 0.50 | 0.41 | 0.36 | 1 | | | | |
| NWF-letters | 0.15 | 0.23 | 0.52 | 0.47 | 0.23 | 0.11 | 0.26 | 0.47 | 1 | | | |
| NWF-words | 0.75* | 0.52 | 0.68 | 0.58 | 0.79* | 0.76* | 0.53 | 0.59 | 0.31 | 1 | | |
| ORF | 0.85* | 0.52 | 0.54 | 0.41 | 0.93* | 0.97* | 0.53 | 0.44 | 0.12 | 0.77* | 1 | |
| RTF | 0.58 | 0.30 | 0.44 | 0.31 | 0.56 | 0.61 | 0.33 | 0.29 | -0.01 | 0.64 | 0.62 | 1 |

Key: Test of Silent Reading Efficiency and Comprehension (TOSREC); Letter Names (LN); Letter Sounds (LS); Phoneme Segmentation (PS); Word Reading Fluency (WRF); Passage Reading Fluency (PRF); Letter Naming Fluency (LNF); Phoneme Segmentation Fluency (PSF); Nonsense Word Fluency-Letter Sounds (NWF-letters); Nonsense Word Fluency-Words Read Correctly (NWF-words); Oral Reading Fluency (ORF); Retell Fluency (RTF)

* p<0.05

Grade 3 Performance on Reading Subtests

Table 3 shows the summary statistics for all the subtests that were administered. Figure 2 displays the reading subtest scores for Grade 3 students in the sample compared to US norms. A one-sample t-test was conducted to determine if a statistically significant difference existed between the performance on subtests of reading from students in Bangalore schools and US norms. Quantitative assumptions for the t-test analysis were met. Students from low-cost schools, following the State Board Curriculum, performed significantly below US norms on all subtest measures. Students from middle-cost schools, following the National Board Curriculum, performed significantly below US norms on all measures. Students from high-cost schools, following the National Board Curriculum, performed significantly lower than US norms on the multiple choice reading comprehension, retell fluency and Daze comprehension measures, but significantly higher on the passage reading fluency subtest (M=88.33, SD=34.01, t(50)=2.87, p<0.001, d=0.69), and had similar scores on the oral reading fluency measure (M=76.72, SD=36.03) when compared to US norms, t(50)=0.62, p=0.53, d=0.57. Students from high-cost schools, following the Montessori curriculum, performed significantly above US norms on tests of passage reading fluency (M=125.29, SD=54.92, t(16)=3.80, p<0.001, d=1.56) and oral reading fluency (M=118.82, SD=57.33, t(16)=2.79, p=0.01, d=1.70), and similarly to US norms on tests of multiple choice reading comprehension (M=10.71, SD=4.21, t(16)=1.67, p=0.11, d=1.72), retell fluency (M=40.88, SD=33.37, t(16)=1.51, p=0.15, d=1.71) and Daze comprehension (M=11.11, SD=10.28, t(16)=0.59, p=0.56, d=1.96). For the TOSREC test, while students from low-cost state schools and middle-cost National schools performed significantly below US norms, students from high-cost National and Montessori schools performed significantly above US norms.
Taken together, these results suggest that while students following the national board curriculum recorded medium effect size values when compared to US norms, those following a Montessori curriculum recorded large effect size values, far surpassing US norms.

Fig 2: Comparison of mean scores on progress-monitoring tools between schools in Bangalore, India and US norms for Grade 3

The correlation matrix for Grade 3 subtests is presented in Table 5. Quantitative assumptions for the correlation analysis were met. The TOSREC was highly correlated with the word reading fluency (WRF), passage reading fluency (PRF) and oral reading fluency (ORF) tests. High correlations were recorded between the DIBELS and easyCBM versions of the same test (i.e. Passage Reading Fluency and Oral Reading Fluency).

Table 5.

Correlations between Reading Subtests for Grade 3

| | TOSREC | PRF | MCRC | ORF | RTF | DAZE |
| --- | --- | --- | --- | --- | --- | --- |
| TOSREC | 1 | | | | | |
| PRF | 0.82* | 1 | | | | |
| MCRC | 0.44 | 0.47 | 1 | | | |
| ORF | 0.82* | 0.96* | 0.48 | 1 | | |
| RTF | 0.49 | 0.52 | 0.46 | 0.54 | 1 | |
| DAZE | 0.66 | 0.67 | 0.44 | 0.71* | 0.52 | 1 |

Key: Test of Silent Reading Efficiency and Comprehension (TOSREC); Passage Reading Fluency (PRF); Multiple-Choice Reading Comprehension (MCRC); Oral Reading Fluency (ORF); Retell Fluency (RTF); Daze Comprehension (DAZE)

* p<0.05

Grade 5 Performance on Reading Subtests

Table 3 shows the summary statistics for all the subtests that were administered. Figure 3 displays the reading subtest scores for Grade 5 students in the sample compared to US norms. A one-sample t-test was conducted to determine if a statistically significant difference existed between the performance on subtests of reading from students in Bangalore schools and US norms. Quantitative assumptions for the t-test analysis were met. Students from low-cost schools, following the State Board Curriculum, and students from middle-cost schools, following the National Board Curriculum, performed significantly below US norms on all subtest measures. Students from high-cost schools, following the National Board Curriculum, showed no significant difference in performance compared to US norms, except on the Daze comprehension measure (M= 13.55, SD = 6.47), on which they performed significantly lower than US norms (t (70) = 2.96, p=0.01, d=0.10). Finally, students from high-cost schools, following the Montessori curriculum, performed similarly to US norms on all subtests measured. For the TOSREC test, while students from low-cost state schools and middle-cost National schools performed significantly below US norms, students from high-cost National and Montessori schools performed significantly above US norms.

Fig 3: Comparison of mean scores on progress-monitoring tools between schools in Bangalore, India and US norms for Grade 5

The correlation matrix for Grade 5 subtests is presented in Table 6. Quantitative assumptions for the correlation analysis were met. The TOSREC was highly correlated with the passage reading fluency and oral reading fluency tests. High correlations were recorded between the DIBELS and easyCBM versions of the same test (i.e. Passage Reading Fluency and Oral Reading Fluency).

Table 6.

Correlations between Reading Subtests for Grade 5

| | TOSREC | PRF | MCRC | ORF | RTF | DAZE |
| --- | --- | --- | --- | --- | --- | --- |
| TOSREC | 1 | | | | | |
| PRF | 0.73* | 1 | | | | |
| MCRC | 0.53 | 0.55 | 1 | | | |
| ORF | 0.74* | 0.94* | 0.54 | 1 | | |
| RTF | 0.55 | 0.53 | 0.41 | 0.55 | 1 | |
| DAZE | 0.50 | 0.53 | 0.42 | 0.55 | 0.40 | 1 |

Key: Test of Silent Reading Efficiency and Comprehension (TOSREC); Passage Reading Fluency (PRF); Multiple-Choice Reading Comprehension (MCRC); Oral Reading Fluency (ORF); Retell Fluency (RTF); Daze Comprehension (DAZE)

* p<0.01

Factors Affecting L2 English Reading Skills

Categorical Variables

Reading subtest scores were regressed on four categorical variables, namely gender, socio-economic status, school and curriculum, to determine whether any of them predicted students’ performance. A multiple linear regression was performed with the following dummy-coded levels: male, mid-income, high-income, middle-cost, low-cost, national and Montessori. Quantitative assumptions for the regression analysis were met. Gender was not a significant predictor. The other three variables were significant predictors and accounted for increasing levels of variance as students progressed through Grades 1, 3 and 5, with curriculum accounting for the most variance.

Socio-economic status (SES) was a significant predictor in Grades 1 (R2 = .19, F(12, 333) = 6.80, p<.01), 3 (R2 = .37, F(6, 321) = 31.28, p<.01) and 5 (R2 = .38, F(6, 322) = 33.80, p<.01), with students from high-income backgrounds performing significantly better than students from low- and middle-income backgrounds.

School was a significant predictor in Grades 1 (R2 = .27, F(12, 333) = 10.51, p<.01), 3 (R2 = .31, F(6, 321) = 23.73, p<.01) and 5 (R2 = .34, F(6, 322) = 27.87, p<.01), with students from high-cost schools performing significantly better than students from low- and middle-cost schools.

Curriculum was a significant predictor in Grades 1 (R2 = .35, F(12, 333) = 15.13, p<.01), 3 (R2 = .39, F(6, 321) = 33.95, p<.01) and 5 (R2 = .38, F(6, 322) = 33.49, p<.01), with students from the Montessori school performing significantly better than students from schools that followed national and state curricula. It is important to note that the efficacy of the curriculum could be attributed to the more favorable student-teacher ratio in Montessori schools compared to the other schools in the sample.
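As a simplified illustration of the variance-explained (R2) statistics reported above, the sketch below computes R2 for a single categorical predictor in pure Python. It is not the authors' analysis code (their models entered several dummy-coded predictors simultaneously), but it shows how a factor such as curriculum accounts for variance in scores:

```python
from statistics import mean

def r2_single_factor(groups):
    """R^2 when a single categorical factor (e.g. curriculum) is the only
    predictor: the fitted value for each student is their group's mean
    score, so R^2 reduces to SS_between / SS_total."""
    scores = [s for g in groups for s in g]
    grand = mean(scores)
    ss_total = sum((s - grand) ** 2 for s in scores)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    return ss_between / ss_total
```

With dummy coding, fitting such a factor in a multiple regression reproduces exactly this group-mean fit, which is why the reported R2 values can be read as the share of score variance the grouping explains.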

Continuous Variables

Language proficiency scores obtained from the Student Oral Language Observation Matrix (SOLOM) were regressed with reading subtest scores to determine whether language proficiency predicted student performance. Quantitative assumptions for the regression analysis were met. It was a significant predictor in Grades 1 (R2 = .12, F(12, 333) = 3.89, p<.01), 3 (R2 = .29, F(6, 321) = 18.90, p<.01) and 5 (R2 = .20, F(6, 322) = 13.75, p<.01), with students who were more proficient in English performing better than students who were less proficient. In Grade 1, the most significant differences were observed for subtests of nonsense word fluency and phoneme segmentation; in Grades 3 and 5, they were observed for subtests of passage reading fluency and multiple choice reading comprehension.

Reading Instruction in Schools

A total of 50 teachers completed questionnaires across the six school sites. There were 20 respondents from the low-cost schools, which followed state-board curricula; the teacher-student ratios averaged 50 students to 1 teacher, and all teachers held a 1-year teaching diploma as the minimum requirement to teach in these schools. There were 17 respondents from the middle-cost schools, which followed National-board curricula; the teacher-student ratios averaged 35 students to 1 teacher, and all teachers held a B.Ed. degree as the minimum requirement. There were 10 respondents from the high-cost school, which followed a National-board curriculum; the teacher-student ratios averaged 25 students to 1 teacher, and all teachers held a B.Ed. degree as the minimum requirement. Finally, there were 3 respondents from the high-cost Montessori school, where the teacher-student ratio was 10 students to 1 teacher and all teachers were trained in the Montessori system of education. Table 7 summarizes the main ideas teachers shared in response to the four questions on the questionnaire: (1) What are the reading goals for your grade? (2) How do you teach students to read in Grades K-2? (3) How do you teach students to read in Grades 3–5? (4) By what grade are students in your school expected to read?

Table 7.

Teachers’ Responses on Reading Instructional Practices in Their Schools

| | Low-Cost Schools / State Curriculum | Middle-Cost Schools / National Curriculum | High-Cost School / National Curriculum | High-Cost School / Montessori |
| --- | --- | --- | --- | --- |
| Reading goals | No goals. Every teacher had a different response, mostly in relation to the mandatory nature of English instruction. | Well-defined goals. Grade 1: listening comprehension, print awareness; Grade 3: decoding skills; Grade 5: comprehension and reading mastery. | Listening and reading skills before moving on to writing. | Child progresses at their own pace, but expected to read short paragraphs by Grade 1 and understand words in context by Grade 3. |
| Teaching reading in Grades K-2 | Pronunciation; repetition; read-alouds; copying from the board; story-telling | High-frequency words; sight word vocabulary; practice drills; listening and speaking skills; pre-reading activities | Print awareness; sight words; modeling; listening; story-telling; flashcards | Blending and segmenting; phonics-based approach; reading stories; letter-sound correspondences; orthographic mapping; phonemic awareness |
| Teaching reading in Grades 3–5 | Read-alouds; reading passages and explaining the meanings of words; picture prompts; spelling; practice drills | Decoding strategies; comprehension skills; modeling; fluency and automaticity; high-frequency words | Checking for comprehension; reading and understanding words in context | Checking for comprehension; reading and understanding words in context |
| By what grade are students expected to read? | Grades 5–8 | Grades 3–5 | Grades 2–3 | Grades K-1 |

One of the teachers from the low-cost school wrote the following when asked about reading goals: “Now a day English is compulsory. Because now competitions is so tough. So students must perfect in English as well as reading also.” Another teacher responded, “Make them to read at list one paragraphs every day.” In contrast, teachers from the middle-cost schools had explicit goals for each grade, and one teacher responded: “In grade 1, students are expected to develop print awareness, identify high-frequency words in context and develop a love for reading; in grade 3, the goal is decoding and modeling reading practices, and by grade 5, we expect students to comprehend text and attain reading mastery.” In the Montessori school, one teacher’s response was the following: “When the child moves from pre-primary (K) to elementary (Grade 1), he/she is expected to read sentences. But the child is allowed to learn at their own pace and we do not set a bar for each grade.” It is interesting to note that although Montessori students in all grades were expected to progress at their own pace and were unfamiliar with our models of testing, this did not appear to influence their performance.

Discussion

Overall Student Performance on Reading Subtests

In general, students in our sample struggled with decoding skills in Grade 1. This was an expected result, considering that they are not instructed in phonics-based approaches, but are typically instructed in an alphabet-spelling method (Gupta, 2014) that bypasses letter sound correspondences and focuses on letter names and sight word vocabulary. Interestingly, students following the National Board Curriculum in both middle-cost and high-cost schools did not significantly differ from US norms on subtests of letter names and word reading fluency. This indicated that irrespective of the instructional approach, students were reading words fairly accurately.

In Grade 3, students in both low- and middle-cost schools performed significantly below US norms on all measures. This was not surprising because, according to teachers, students from low-cost schools were expected to read between Grades 5–8, and students from middle-cost schools were only expected to read fluently and comprehend efficiently by Grade 5. In comparison, students from high-cost schools, who followed the National Board Curriculum, performed similarly to US norms on subtests of oral reading fluency but significantly below US norms on comprehension measures. One explanation for this result might be that students were not familiar with the cloze reading (Daze comprehension) and multiple choice response formats of the comprehension tasks. Most school-based comprehension passages administered to students in an Indian context are followed by open-ended questions that rely on the students’ ability to look for answers in the text and write them down. Comprehension is a product of the task used to measure it.

In Grade 5, students in both low and middle-cost schools performed significantly below US norms on all measures. Students in high-cost schools, following the National Board curriculum, showed no significant difference in performance compared to US norms, except on the Daze comprehension measure, which again could be attributed to the nature of the task.

Student Performance in the Montessori School

One limitation of the study was that we had a very small sample of students from the Montessori school. However, this is representative of the number of children who remain in the Montessori system after kindergarten. A large number of children in India attend Montessori preschools but move to mainstream schools following the state or National curriculum by kindergarten or Grade 1. Montessori is a relatively new education system in India, and parents are still skeptical about their children continuing in Montessori programs, because it becomes increasingly difficult for students to get into regular schools in later grades, owing to the large school-age population and fierce competition for places at these schools. Students have to switch at some point because the state and National Board curricula include standardized board exams in Grades 10 and 12 that Montessori schools do not administer, and students need these standardized scores to gain admission into colleges. Parents therefore opt to make the switch earlier rather than later, so their children follow one curriculum throughout their schooling and are better prepared to take these standardized board exams.

Students from the Montessori school either surpassed or were on par with US norms on all reading subtest measures across Grades 1, 3 and 5. The important finding from this study was that attending the Montessori school accounted for 35–39% of the variance across grades, and these students performed significantly better than students from schools that followed national and state curricula. It would be worth conducting a follow-up study with a larger sample size to confirm these findings.

Student Performance in the Low-Cost Schools

Unfortunately, economically disadvantaged students in the sample had not acquired reading skills in English even by Grade 5. The teacher data indicated that these students were only expected to read in Grades 5–8, so teachers set no reading goals for the elementary grades. The focus was on skills like copying from the board, read-alouds/choral reading, practice drills and spelling drills, without attention to the component skills required for reading acquisition. The push towards English does not seem to be working in this context for three reasons: (a) students do not have continuity of access to English literacy at home because they are first-generation school-goers; (b) the English teachers and role models in school are not proficient in the language; and (c) students have no bilingual supports for using their L1 to enhance their L2 acquisition.

These results are similar to Ramanathan and Atkinson’s (1999) finding that English remains inaccessible to students from low-SES backgrounds and is still maintained as a language of the elite in India, resulting in a class divide. In a more recent study conducted in the same part of the country, Shenoy and Pearson (2018) measured oral language precursors to reading and found that students from low-cost schools in Bangalore were performing inadequately on both English (L2) and Kannada (L1) measures in the elementary grades, indicating heritage language loss alongside limited gains in the new language. The researchers had also suggested that low-cost schools consider late-exit bilingual programs, in which students are instructed in their L1 for 100% of the time in kindergarten, 70% in their L1 and 30% in English in Grade 1, 50% in each language in Grade 2, and so on, until they are instructed in English for 100% of the time by the end of elementary school, in Grade 5. This approach uses the students’ L1 to facilitate their L2 acquisition, and students learn foundational skills in their L1 that can be transferred to their L2 (Cummins, 1991). However, the school management quickly rejected this recommendation because of the push towards English-medium instruction right from kindergarten.

Moreover, Nakamura et al. (2018) found that early exposure to English might have poorer consequences for literacy acquisition than waiting for students to develop a basic threshold reading level in their native language before transitioning to English. Their analysis indicated that 20% of students were not ready to be taught in English even by Grade 5.

Implications for Research and Practice

The measures used in the study showed an overall moderate to high level of reliability and can be utilized by other researchers to study reading development in this context. Reliability was highest for Grade 1 and decreased slightly for Grades 3 and 5; this could be attributed to the limited number of subtests administered in those grades. Adding a few more fluency, vocabulary and comprehension measures for those grades could increase the reliability.

The push for English-immersion in India is pervasive across low-cost, middle-cost and high-cost private schools in the country. While students from middle and high-cost schools have access to resources outside of school (e.g. English literacy practices at home, access to libraries and tutorial services), students from low-cost schools do not have access to any additional supports to help them acquire English. Moreover, the teachers that are employed at these schools are poor English role models themselves, and do not have any practices in place that would facilitate L2 language and reading acquisition.

Though we had a small sample of students attending the Montessori school, the results indicate that phonics-based approaches to teaching reading improve scores dramatically. And although our Montessori sample came from higher-SES backgrounds, Dixon et al. (2011) and Gupta (2014) found evidence of phonics-based approaches dramatically improving reading scores even for students from slums and rural areas in India. One outcome of our study is evidence supporting a systemic change of curriculum in low-cost schools and a move towards phonics-based approaches to teaching reading.

Finally, given the number of languages that students are exposed to in this context, it would be helpful to create a universal requirement for when these languages are introduced in school and what purposes they serve (e.g. learning conversational Kannada vs. academic English). This would give assessors a better way of measuring proficiency in these languages, since they would know, for example, whether all students had been exposed to a language for two years or for five.

Limitations and Future Directions

We chose the DIBELS and easyCBM progress-monitoring tools as a first step, by looking at approaches widely used in the US. This study attempted to extend their use and observe their efficacy in Indian schools, but we do not know whether they are the tools best suited to this context or whether other approaches might work better. These tools did not include an oral language component; rather, they were measures of decoding, reading fluency and reading comprehension. The results of the study therefore need to be viewed within the parameters of this limitation. A future direction would be to compare and contrast various assessment tools within this multilingual context and identify the tool that best meets the needs of Indian students.

Secondly, we had a small sample of students from the Montessori school. A future direction for research would be a follow-up study involving a larger population of students from Montessori schools, as well as implementing phonics-based approaches in low-cost schools and evaluating the efficacy of these programs. This direction would not only stress the importance of phonics-based approaches to teaching reading, but could also play a paramount role in effecting systemic change in reading instruction policies in schools and in introducing more rigorous teacher training programs.

Thirdly, we did not have an expressive or receptive oral language measure in L2 English. We could not find one that was normed on an Indian population, and we could not control for the cultural bias, especially in pictures aimed at measuring expressive vocabulary, in other tests that were normed in the US and other countries. We did however have a rating scale that researchers used to chart out a student’s language proficiency based on observations. A future direction would be to develop and norm a language proficiency test in the Indian context.

A fourth limitation was our inability to capture language proficiency data across the multiple languages that a child is exposed to in their school and home environments. The main reason was that all schools in our sample introduced English in kindergarten, but Hindi and Kannada were introduced at different elementary grades at different school sites. Moreover, children in our sample came from multiple home language backgrounds, and an equivalent language measure would have meant hand-picking students to take part in our study rather than gathering data from all students for a holistic picture of reading acquisition. Our hypotheses were thus targeted at an exploratory study measuring L2 English reading development across the grades.

Finally, not having access to performance in dominant languages restricts our analyses of identifying reading disabilities in this population of students. A new assessment tool, the Dyslexia Assessment for Languages of India (DALI), developed by the National Brain Research Centre, India (unpublished), has recently been standardized on Indian students and takes dominant language use into consideration. It has currently been standardized in four languages: Hindi, Marathi, Kannada and English, and includes tools for teachers as well as a battery of tests for psychologists to identify reading disabilities in this population of students. A next step and future direction would be to use the DALI screeners and teacher rating scales to identify students at risk for reading disabilities.

Conclusion

This study addressed a critical gap by measuring English reading acquisition in private schools in Bangalore, India, and set out to introduce progress-monitoring tools in this context. The results showed that regardless of the method of instruction, most students in the sample were able to read by the end of elementary school. Our results suggest that phonics-based approaches in the earlier grades can accelerate this process and improve reading outcomes for these students. Moreover, students in low-cost schools would benefit from L1 instruction as well as a systematic reading curriculum that develops these skills in the early elementary grades rather than in middle or high school.

Footnotes

1

The DIBELS Next measures were normed on students from 1006 schools representing cities, suburbs, towns and rural parts of the US. 48% of students were female; 48% received free and reduced-price lunch; 62% were White, 22% Hispanic, 8% Black, 3% Asian and 2% Native American. The easyCBM measures were normed on 500 students from each of four regions (Midwest, Northeast, Southeast and West), for a total of 2000 students per measure per grade. Norms are reported by region and by gender and ethnicity (White Females, White Males, Non-White Females and Non-White Males) for each of the subtest scores.

References

  1. Adams MJ (1990). Beginning to read: Thinking and learning about print. Cambridge, Mass: MIT Press. [Google Scholar]
  2. Aggarwal S (1991). Three language formula: An educational problem. Gyan Publishing House. [Google Scholar]
  3. Anderson D, Alonzo J, Tindal G, Farley D, Irvin PS, Lai CF, … & Wray KA (2014). Technical manual: easyCBM (Technical Report No. 1408). Behavioral Research and Teaching. [Google Scholar]
  Annual Status of Education Report (2016). ASER Centre, New Delhi. [Google Scholar]
  4. Catts HW, & Kamhi AG (Eds.). (1999). Language and reading disabilities. Boston: Allyn and Bacon. [Google Scholar]
  5. Cummins J (1991). Interdependence of first-and second-language proficiency in bilingual children. Language processing in bilingual children, 70–89. [Google Scholar]
  6. Crowder RG, & Wagner RK (1992). The psychology of reading: An introduction. Oxford University Press. [Google Scholar]
  7. Deno SL (2003). Developments in curriculum-based measurement. The Journal of Special Education, 37(3), 184–192. [Google Scholar]
  8. Dixon P, Schagen I, & Seedhouse P (2011). The impact of an intervention on children’s reading and spelling ability in low-income schools in India. School Effectiveness and School Improvement, 22(4), 461–482. [Google Scholar]
  9. Dowd AJ, & Bartlett L (2019). The Need for Speed: Interrogating the Dominance of Oral Reading Fluency in International Reading Efforts. Comparative Education Review, 63(2), 189–212. [Google Scholar]
  10. Duke NK, Pressley M, Hilden K, Stone CA, & Silliman ER (2004). Difficulties with reading comprehension. Handbook of language and literacy: Development and disorders, 501–520. [Google Scholar]
  11. Ehri LC (2002). Reading processes, acquisition, and instructional implications. Dyslexia and literacy: Theory and practice, 167–186. [Google Scholar]
  12. Fuchs D, Fuchs LS, & Compton DL (2012). Smart RTI: A next-generation approach to multilevel prevention. Exceptional children, 78(3), 263–279. [DOI] [PMC free article] [PubMed] [Google Scholar]
  13. Gillon GT (2005). Phonological awareness: Effecting change through the integration of research findings. Language, Speech, and Hearing Services in Schools, 36(4), 346–349. [DOI] [PubMed] [Google Scholar]
  14. Goffreda CT, & DiPerna JC (2010). An empirical review of psychometric evidence for the Dynamic Indicators of Basic Early Literacy Skills. School Psychology Review, 39(3), 463–483. [Google Scholar]
  15. Good RH, Kaminski RA, & Cummings K (2011). DIBELS next. Cambium Learning. [Google Scholar]
  16. Gupta R (2014). Change in Teaching Practices: Case of Phonics Instruction in India. Procedia-Social and Behavioral Sciences, 116, 3911–3915. [Google Scholar]
  17. Haager D, & Windmueller MP (2001). Early reading intervention for English language learners at-risk for learning disabilities: Student and teacher outcomes in an urban school. Learning Disability Quarterly, 24(4), 235–250. [Google Scholar]
  18. Kaminski RA, & Good RH III. (1996). Toward a technology for assessing basic early literacy skills. School Psychology Review, 25(2), 215–227. [Google Scholar]
  19. Keller-Margulis MA, Shapiro ES, & Hintze JM (2008). Long-term diagnostic accuracy of curriculum-based measures in reading and mathematics. School Psychology Review, 37(3), 374–390. [Google Scholar]
  20. Kurrien J (2005). Notes for the Meeting of the National Focus Group on Teaching of English, and Note on Introduction of English at the Primary Stage. Ms., NFG-English. [Google Scholar]
  21. LaBerge D, & Samuels SJ (1974). Toward a theory of automatic information processing in reading. Cognitive Psychology, 6(2), 293–323. [Google Scholar]
  22. Munger KA, LoFaro SA, Kawryga EA, Sovocool EA, & Medina SY (2014). Does the Dynamic Indicators of Basic Early Literacy Skills Next Assessment Take a “Simple View” of Reading? Educational Assessment, 19(3), 204–228. [Google Scholar]
  23. Nag-Arulmani S (2000): Types and Manifestations of Learning Difficulties in Indian Classrooms. Paper presented at the first Orientation Programme for Schoolteachers, National Institute for Public Co-operation and Child Development (NIPCCD), Bangalore, India. [Google Scholar]
  24. Nag-Arulmani S (2003). Reading difficulties in Indian languages. Dyslexia in different languages: Cross-linguistic comparisons, 235–254. [Google Scholar]
  25. Nakamura PR, de Hoop T, & Holla CU (2019). Language and the Learning Crisis: Evidence of Transfer Threshold Mechanisms in Multilingual Reading in South India. The Journal of Development Studies, 55(11), 2287–2305. [Google Scholar]
  26. National Council of Educational Research and Training (2011). National Curriculum Framework 2005 (No. id: 1138). [Google Scholar]
  27. National Reading Panel (US), National Institute of Child Health, & Human Development (US). (2000). Report of the national reading panel: Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction: Reports of the subgroups. National Institute of Child Health and Human Development, National Institutes of Health. [Google Scholar]
  28. Nese JF, Biancarosa G, Cummings K, Kennedy P, Alonzo J, & Tindal G (2013). In search of average growth: Describing within-year oral reading fluency growth across Grades 1–8. Journal of School Psychology, 51(5), 625–642. [DOI] [PubMed] [Google Scholar]
  29. Perfetti CA (1985). Reading ability. Oxford University Press. [Google Scholar]
  30. Piller B, & Skillings MJ (2005). English Language Teaching Strategies Used by Primary Teachers in One New Delhi, India School. Tesl-Ej, 9(3), 1–23. [Google Scholar]
  31. Quirk M, & Beem S (2012). Examining the relations between reading fluency and reading comprehension for English language learners. Psychology in the Schools, 49(6), 539–553. [Google Scholar]
  32. Ramachandran V, Pal M, Jain S, Shekar S, & Sharma J (2005). Teacher motivation in India (pp. 96–103). Discussion paper, Azim Premji Foundation, Bangalore. [Google Scholar]
  33. Ramanathan H, & Bruning MD (2003). Reflection on Teaching Oral English Skills in India: A Research Report. Journal of the International Society for Teacher Education, 7(1), 48–55. [Google Scholar]
  34. Ramanathan H (2008). Testing of English in India: A developing concept. Language Testing, 25(1), 111–126. [Google Scholar]
  35. Ramanathan V, & Atkinson D (1999). Individualism, academic writing, and ESL writers. Journal of Second Language Writing, 8(1), 45–75. [Google Scholar]
  36. Saini A (2000). Literacy and empowerment: An Indian scenario. Childhood Education, 76(6), 381–384. [Google Scholar]
  37. Scarborough HS (1998). Early identification of children at risk for reading disabilities: Phonological awareness and some other promising predictors. Specific reading disability: A view of the spectrum, 75–119. [Google Scholar]
  38. Share D, & Stanovich K (1995). Has the phonological recoding model of reading acquisition and reading disability led us astray? Issues in Education, 1, 1–57. [Google Scholar]
  39. Shenoy S & Pearson PD (2018). School culture and its impact on special education practices in Bangalore, India. Journal of the International Association of Special Education, 18(1), 49–63. [Google Scholar]
  40. Stahl SA, & Murray BA (1994). Defining phonological awareness and its relationship to early reading. Journal of Educational Psychology, 86(2), 221–234. [Google Scholar]
  41. Wagner RK, Torgesen JK, & Rashotte CA (1994). Development of reading-related phonological processing abilities: New evidence of bidirectional causality from a latent variable longitudinal study. Developmental psychology, 30(1), 73–87. [Google Scholar]
  42. Wagner RK, Torgesen JK, Rashotte CA, & Pearson NA (2010). TOSREC: Test of silent reading efficiency and comprehension. Pro-Ed. [Google Scholar]
  43. Wolf M, & Katzir-Cohen T (2001). Reading fluency and its intervention. Scientific Studies of Reading, 5(3), 211–239. [Google Scholar]